
Add disk command and allows multiple additional disks #1065


Merged (1 commit, Nov 12, 2022)

Conversation

sam-berning (Contributor)

#722

After looking through the discussion on #759, I implemented a basic volume command. Volumes are stored in a _volumes directory under $LIMA_HOME and can be attached by name to an instance in its LimaYAML.

I also added a boot script that performs some setup for volumes. So far, it creates a filesystem labeled with the volume name the first time the volume is attached to an instance. In the future, we could expand it to bind-mount the volume to specific directories.
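
For illustration, attaching a disk named "data" to an instance might look like this in its lima.yaml (a sketch only; the additionalDisks key is an assumption based on the final PR title, and the exact schema was still evolving in this PR):

	# Hypothetical lima.yaml excerpt: attach an existing Lima disk by name.
	additionalDisks:
	- "data"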

@jandubois (Member)

Please fix linter errors (run make lint locally to verify).

}
return err
}
if err := deleteVolume(vol); err != nil {
Member:

I think you should call LockVolume first to make sure you are not deleting a volume that is in use by a running instance.

Member:

I just realized that LockVolume would require an instance here, so vol.InspectVolume() should be called instead and must return an error before you can call deleteVolume().
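
A minimal sketch of that inspect-before-delete guard (the Volume type and deleteVolume helper shapes here are assumptions based on the surrounding diff, not the PR's exact API):

	// Sketch: refuse to delete a volume while an instance holds it.
	// This mirrors the disk.Instance check visible later in this diff.
	func deleteVolumeSafely(vol *Volume) error {
		if vol.Instance != "" {
			return fmt.Errorf("volume %q is in use by instance %q", vol.Name, vol.Instance)
		}
		return deleteVolume(vol)
	}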

@jandubois (Member) left a comment:

Thank you, this is very useful. I'm done with this round of reviewing, but have some feedback that should be addressed.

@jandubois (Member)

Please rebase the PR to resolve merge conflicts!

@AkihiroSuda added this to the v0.14 (tentative) milestone on Oct 19, 2022
@AkihiroSuda (Member) commented on Oct 21, 2022:

Please squash the commits, and make sure to keep the PR title consistent with the actual code.

@jandubois changed the title from "Adds volume command and allows multiple disks" to "Add disk command and allows multiple additional disks" on Oct 21, 2022
@jandubois (Member)

This PR also needs to be rebased on latest master so it can pass integration tests.

@sam-berning force-pushed the volume branch 3 times, most recently from 6aff1bb to a34e0cc, on November 3, 2022 at 20:36
@AkihiroSuda previously approved these changes on Nov 9, 2022
@AkihiroSuda (Member) left a comment:

Thanks, LGTM but needs rebase again.

The locking mechanism still seems racy, but that can be addressed in another PR.
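
For illustration only (this is not Lima's actual locking code), one way a follow-up could close such races is to create the lock file with O_CREATE|O_EXCL, which fails atomically if another process already holds it:

	// Sketch: atomic lock acquisition. O_EXCL guarantees the create fails
	// if the lock file already exists, so two processes cannot both
	// believe they acquired the disk.
	func acquireLock(lockPath string) error {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err != nil {
			return fmt.Errorf("already locked: %w", err)
		}
		return f.Close()
	}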

Comment on lines +47 to +52
if errors.Is(err, fs.ErrNotExist) {
disk.Instance = ""
disk.InstanceDir = ""
} else {
return nil, err
}
Member:

The struct is already initialized with the zero values, so the explicit assignments seem redundant:

Suggested change:
-	if errors.Is(err, fs.ErrNotExist) {
-		disk.Instance = ""
-		disk.InstanceDir = ""
-	} else {
-		return nil, err
-	}
+	if !errors.Is(err, fs.ErrNotExist) {
+		return nil, err
+	}

return fmt.Errorf("disk %q already exists (%q)", name, diskDir)
}

logrus.Infof("Creating a disk %q", name)
Member:

Maybe include the disk size in the log, to show how the size was parsed:

Suggested change:
-	logrus.Infof("Creating a disk %q", name)
+	logrus.Infof("Creating disk %q size %s", name, units.BytesSize(float64(diskSize)))

`,
Short: "List existing Lima disks",
Aliases: []string{"ls"},
Args: cobra.NoArgs,
Member:

Thought for a later PR (low priority):

I think it would be good to allow disk names to be passed as args (for symmetry with limactl ls, which allows instance names), and then only emit the data for those disk images.
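
A rough sketch of that later change (the store.Disks helper and the command shape are assumptions, not code from this PR):

	// Sketch: accept optional disk names; with no args, fall back to all disks.
	diskListCommand := &cobra.Command{
		Use:     "list [DISK]...",
		Short:   "List existing Lima disks",
		Aliases: []string{"ls"},
		Args:    cobra.ArbitraryArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			names := args
			if len(names) == 0 {
				var err error
				names, err = store.Disks() // assumed helper returning all disk names
				if err != nil {
					return err
				}
			}
			for _, name := range names {
				fmt.Println(name) // real code would emit the full row of data per disk
			}
			return nil
		},
	}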

Args: cobra.MinimumNArgs(1),
RunE: diskDeleteAction,
}
diskDeleteCommand.Flags().Bool("force", false, "force delete")
Member:

The option should allow -f shorthand, just like limactl delete -f INSTANCE:

Suggested change:
-	diskDeleteCommand.Flags().Bool("force", false, "force delete")
+	diskDeleteCommand.Flags().BoolP("force", "f", false, "force delete")

return err
}
if disk.Instance != "" {
return fmt.Errorf("cannot delete disk %q in use by instance %q", disk.Name, disk.Instance)
@jandubois (Member) on Nov 12, 2022:

It feels inconsistent to me that a locked disk will abort the command, but a non-existing disk is simply skipped:

limactl disk rm foo bar

If foo doesn't exist, bar will still be deleted. But if foo exists and is locked, then bar will not be deleted (and there is no message to the effect that additional disks have been skipped due to the error).

I think a locked disk should be logged as an error, but additional disks should still be processed.

And then I wonder if failures of deleteDisk() should be treated the same way too: log an error, but continue to attempt deleting the remaining disks?

Of course the command must exit with a non-zero status code if any disk couldn't be deleted. I'm not sure how non-existing disks should be treated: should they also cause a non-zero exit status?

@AkihiroSuda WDYT?

sam-berning (Contributor, Author):

Can you open an issue for this? I think we should keep this behavior consistent between limactl disk delete and limactl delete, and I think this needs some more discussion.
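
For reference, the log-and-continue behavior described above might look roughly like this (a sketch only; the deleteDisk helper and the error-aggregation style are assumptions):

	// Sketch: log each failure but keep processing the remaining disks,
	// then exit non-zero if any deletion failed.
	var failed bool
	for _, name := range args {
		if err := deleteDisk(name); err != nil {
			logrus.WithError(err).Errorf("failed to delete disk %q", name)
			failed = true
			continue
		}
		logrus.Infof("Deleted disk %q", name)
	}
	if failed {
		return errors.New("some disks could not be deleted")
	}
	return nil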

Comment on lines +210 to +213
instances, err := store.Instances()
if err != nil {
return err
}
Member:

This should be moved outside the for loop:

		instances, err := store.Instances()
		if err != nil {
			return err
		}
		for _, diskName := range args {
			...
		}

That said, the "expensive" operation is store.Inspect(instName), so those results would have to be cached outside the loop as well. Maybe not worth the extra code.
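
If that caching were wanted anyway, a minimal sketch could memoize the inspections in a map (store.Inspect's return type is assumed here from its use in the surrounding diff):

	// Sketch: inspect each instance at most once across all disks.
	inspected := make(map[string]*store.Instance)
	inspect := func(instName string) (*store.Instance, error) {
		if inst, ok := inspected[instName]; ok {
			return inst, nil
		}
		inst, err := store.Inspect(instName)
		if err != nil {
			return nil, err
		}
		inspected[instName] = inst
		return inst, nil
	}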

@@ -78,6 +82,14 @@ export ftp_proxy=http://localhost:2121

INFO "Starting \"$NAME\" from \"$FILE\""
defer "limactl delete -f \"$NAME\""

if [[ -n ${CHECKS["disk"]} ]]; then
if ! limactl disk ls | grep -q "^data\s"; then
Member:

This is an example where it would be useful to be able to pass disk names to the ls command:

if ! limactl disk ls data; then
   ...
fi

return "", nil, err
}
logrus.Infof("Mounting disk %q on %q", diskName, d.MountPoint)
err = d.Lock(cfg.InstanceDir)
Member:

If there are any errors from here until ha.Run() is called, then we won't unlock the disk, and the user will have to manually run limactl disk unlock DISK before being able to use it again. It would be better to have some "unlock on error" handling somewhere in hostagentAction.
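
One shape that handling could take (illustrative only; d.Unlock's existence and the surrounding signatures are assumptions based on this diff and on the limactl disk unlock command):

	// Sketch: release the lock again if anything fails between taking it
	// and handing control to ha.Run().
	if err := d.Lock(cfg.InstanceDir); err != nil {
		return "", nil, err
	}
	locked := true
	defer func() {
		if locked {
			if uerr := d.Unlock(); uerr != nil {
				logrus.WithError(uerr).Warnf("failed to unlock disk %q", diskName)
			}
		}
	}()
	// ... remaining setup that may fail ...
	locked = false // cleared only once ha.Run() has taken ownership of the lock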

@jandubois (Member) left a comment:

I've "resolved" all existing discussions, as they all seem to have been implemented.

I've done one more round of review and found a few minor issues. I'm fine with merging this PR as-is and addressing the issues in a follow-up, because we have a bunch of large PRs piled up now, and we will get more and more merge conflicts until we start merging the big pieces.

@sam-berning So I'm going to merge this PR, but please look at all the unresolved feedback, and either address it in a follow-up PR, or let me know and I'll open an issue for it!

Labels: enhancement, impact/changelog