
Split vmi controller into smaller, more maintainable parts #14238


Merged
merged 5 commits into kubevirt:main from split_vmi_controller
Mar 25, 2025

Conversation

vladikr
Member

@vladikr vladikr commented Mar 14, 2025

What this PR does

This PR splits the controller into three smaller parts: storage.go, lifecycle.go, and vmi.go.
The idea is to isolate storage-related code in storage.go, compute/lifecycle-related code in lifecycle.go, and keep the core controller logic in vmi.go.

The next step would be to move the storage-related code into pkg/storage, but I have concerns about that change. Perhaps separating it into vmi/storage with its own OWNERS file would be a better approach.
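
For readers unfamiliar with the pattern, here is a minimal, self-contained sketch (hypothetical package, type, and method names, not KubeVirt's actual code) of how one controller type can have its methods spread across storage.go, lifecycle.go, and vmi.go within a single Go package:

```go
// A minimal sketch of the file-split pattern (hypothetical names, not
// KubeVirt's actual code): Go allows methods of a single type to live in
// separate files of the same package, so storage.go, lifecycle.go, and
// vmi.go can all attach methods to the controller defined in vmi.go.
package vmicontroller

import "fmt"

// vmi.go: core controller definition and the main sync entry point.
type Controller struct{}

func (c *Controller) execute(key string) error {
	if err := c.syncStorage(key); err != nil { // implemented in storage.go
		return err
	}
	return c.syncLifecycle(key) // implemented in lifecycle.go
}

// storage.go: storage-related reconciliation (e.g. PVC/DataVolume handling).
func (c *Controller) syncStorage(key string) error {
	fmt.Println("reconciling volumes for", key)
	return nil
}

// lifecycle.go: compute/lifecycle reconciliation (e.g. pod creation, phase updates).
func (c *Controller) syncLifecycle(key string) error {
	fmt.Println("reconciling lifecycle for", key)
	return nil
}
```

Because all three files share one package, the split changes only file layout, not behavior, which matches the intent of this PR.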

Special notes for your reviewer

Release note

NONE

@kubevirt-bot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@kubevirt-bot kubevirt-bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note-none Denotes a PR that doesn't merit a release note. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Mar 14, 2025
@vladikr
Member Author

vladikr commented Mar 14, 2025

/cc @xpivarc

@kubevirt-bot kubevirt-bot requested a review from xpivarc March 14, 2025 12:20
@vladikr
Member Author

vladikr commented Mar 17, 2025

/test all

@vladikr vladikr force-pushed the split_vmi_controller branch from 5b2571d to 8119bee Compare March 17, 2025 18:01
@vladikr
Member Author

vladikr commented Mar 17, 2025

/test all

@vladikr vladikr force-pushed the split_vmi_controller branch from 8119bee to 9710467 Compare March 17, 2025 21:22
@vladikr
Member Author

vladikr commented Mar 17, 2025

/test all

@vladikr vladikr marked this pull request as ready for review March 17, 2025 21:23
@dosubot dosubot bot added the sig/storage label Mar 17, 2025
@vladikr vladikr force-pushed the split_vmi_controller branch 2 times, most recently from 646d244 to 07e1084 Compare March 17, 2025 22:32
Contributor

@fossedihelm fossedihelm left a comment


Looks good overall, thanks!
What I dislike is that now we have storage.go and datavolumes.go files.
And in storage.go, there are some methods about the datavolumes.
Can we merge them? or move dv methods under the right file?
Thank you!

@vladikr
Member Author

vladikr commented Mar 18, 2025

Looks good overall, thanks! What I dislike is that now we have storage.go and datavolumes.go files. And in storage.go, there are some methods about the datavolumes. Can we merge them? or move dv methods under the right file? Thank you!

@fossedihelm thanks. I think it's better to separate at this point. Did you mean the datavolume event handlers? (add/update/delete) or anything else?

@fossedihelm
Contributor

Looks good overall, thanks! What I dislike is that now we have storage.go and datavolumes.go files. And in storage.go, there are some methods about the datavolumes. Can we merge them? or move dv methods under the right file? Thank you!

@fossedihelm thanks. I think it's better to separate at this point. Did you mean the datavolume event handlers? (add/update/delete) or anything else?

Well, I was referring to the event handlers. But there is also a cross-context function, listVMIsMatchingDV (the name is misleading), which is used by both the PVC and the DV handlers.
I'm still in favor of merging them into the storage file, but if you prefer to keep them separate I'm fine with it :)
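
For context, a hedged sketch of the kind of cross-context helper being discussed; the names below are illustrative stand-ins, not KubeVirt's real listVMIsMatchingDV signature. The point is that both the DataVolume and the PVC event handlers resolve the affected VMIs through the same lookup and enqueue them, which is why the helper does not fit neatly into datavolumes.go alone:

```go
// Illustrative only -- the real helper lives in the vmi controller package and
// uses informer indexers; this stub just shows why one lookup can serve both
// the DataVolume and the PVC event handlers.
package vmicontroller

type vmiLister interface {
	// Hypothetical: returns the keys of VMIs that reference the given volume source.
	vmisReferencingVolume(namespace, name string) []string
}

type controller struct {
	lister vmiLister
	queue  []string // stand-in for a workqueue of VMI keys
}

// Shared lookup used by both handler paths.
func (c *controller) enqueueVMIsForVolume(namespace, name string) {
	for _, key := range c.lister.vmisReferencingVolume(namespace, name) {
		c.queue = append(c.queue, key)
	}
}

// DataVolume event handlers (add/update/delete) would all funnel here.
func (c *controller) onDataVolumeChange(namespace, name string) {
	c.enqueueVMIsForVolume(namespace, name)
}

// The PVC event handlers reuse the same lookup, which makes it cross-context.
func (c *controller) onPVCChange(namespace, name string) {
	c.enqueueVMIsForVolume(namespace, name)
}
```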

@vladikr vladikr force-pushed the split_vmi_controller branch from 07e1084 to da01c3c Compare March 18, 2025 19:38
@fossedihelm
Contributor

Thank you @vladikr!
Can you drop the [WIP] from the PR title so that the do-not-merge/work-in-progress label disappears?
/lgtm

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Mar 19, 2025
@vladikr vladikr changed the title [WIP] Split vmi controller into smaller, more maintainable parts Split vmi controller into smaller, more maintainable parts Mar 19, 2025
@vladikr
Member Author

vladikr commented Mar 20, 2025

@jean-edouard Can you please take a look?
It moves functions between files but does not make any real functional changes; those will come in a follow-up.

@vladikr
Member Author

vladikr commented Mar 21, 2025

/test pull-kubevirt-e2e-k8s-1.32-sig-storage

Contributor

@jean-edouard jean-edouard left a comment


/approve

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 21, 2025
@kubevirt-commenter-bot

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-windows2016
/test pull-kubevirt-e2e-kind-1.30-vgpu
/test pull-kubevirt-e2e-kind-sriov
/test pull-kubevirt-e2e-k8s-1.32-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.30-sig-network
/test pull-kubevirt-e2e-k8s-1.30-sig-storage
/test pull-kubevirt-e2e-k8s-1.30-sig-compute
/test pull-kubevirt-e2e-k8s-1.30-sig-operator
/test pull-kubevirt-e2e-k8s-1.31-sig-network
/test pull-kubevirt-e2e-k8s-1.31-sig-storage
/test pull-kubevirt-e2e-k8s-1.31-sig-compute
/test pull-kubevirt-e2e-k8s-1.31-sig-operator

@fossedihelm
Contributor

Quite strange that the storage tests are failing, given the changes only involve moving functions and minor refactoring.

@jean-edouard
Contributor

/hold

@kubevirt-bot kubevirt-bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 21, 2025
@vladikr
Member Author

vladikr commented Mar 21, 2025

yup, something is not right with this test...

@vladikr
Member Author

vladikr commented Mar 21, 2025

@jean-edouard the test itself is passing just fine. Looks like the cleanup is causing the issue.

@vladikr vladikr force-pushed the split_vmi_controller branch from da01c3c to 02c12a2 Compare March 22, 2025 16:33
@kubevirt-bot kubevirt-bot removed the lgtm Indicates that a PR is ready to be merged. label Mar 22, 2025
@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jean-edouard

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

@iholder101 iholder101 left a comment


Thanks a lot @vladikr! Looks awesome overall.

@@ -375,3 +375,38 @@ func (c *Controller) getFilesystemOverhead(pvc *k8sv1.PersistentVolumeClaim) (vi
}
return storagetypes.GetFilesystemOverhead(pvc.Spec.VolumeMode, pvc.Spec.StorageClassName, cdiConfig)
}

func (c *Controller) syncVolumesUpdate(vmi *virtv1.VirtualMachineInstance) {
Contributor


tiny nit: this belongs to the previous commit.

Member Author


done

@@ -747,7 +747,7 @@ var _ = Describe(SIG("Volumes update with migration", decorators.RequiresTwoSche
return controller.NewVirtualMachineConditionManager().HasCondition(
vm, virtv1.VirtualMachineManualRecoveryRequired)
}).WithTimeout(120 * time.Second).WithPolling(time.Second).Should(BeFalse())
libwait.WaitForVMIPhase(vmi, []v1.VirtualMachineInstancePhase{v1.Running})
Eventually(matcher.ThisVMI(vmi), 360*time.Second, time.Second).Should(matcher.BeInPhase(v1.Running))
Contributor


I fail to understand how this is related.
Should we be concerned that these failures indicate some behavior was changed in this PR?

Member Author


@iholder101 no, WaitForVMIPhase also watches for unexpected warning events and fails on them. For example: `Unexpected Warning event received: testvmi-klbkg,f35c019b-b555-4665-a08f-ea219efa6a63: Failed to delete virtual machine pod virt-launcher-testvmi-klbkg-qn8zd`.
Obviously, all pods are deleted in this test, and it completes. I think the warning is related to garbage collection; I was watching the pods manually, and there were no delays or failures.
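
For readers following the thread, an annotated restatement of the two wait styles from the diff above (the behavioral difference of WaitForVMIPhase is as described in the comment above, not independently verified here):

```go
// Event-driven wait: besides waiting on the phase, it watches events on the VMI
// and fails the test as soon as an unexpected Warning event appears (as described
// above), even if the VMI does eventually reach Running.
libwait.WaitForVMIPhase(vmi, []v1.VirtualMachineInstancePhase{v1.Running})

// Polling wait (the replacement in the diff): it only re-fetches the VMI and checks
// its phase, so a transient warning such as a late pod-deletion event from garbage
// collection does not abort the wait.
Eventually(matcher.ThisVMI(vmi), 360*time.Second, time.Second).Should(matcher.BeInPhase(v1.Running))
```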

Contributor


This is most certainly not related to your changes, but still concerning.
I've opened an issue to track this: #14326.

vladikr added 5 commits March 23, 2025 13:18

This moves storage-related functions and event handlers to storage.go

Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
This moves all lifecycle-related functions and event handlers into lifecycle.go

Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
@vladikr vladikr force-pushed the split_vmi_controller branch from 02c12a2 to db53a5d Compare March 23, 2025 17:18
@fossedihelm
Contributor

/lgtm
Thanks!
I leave it to @iholder101 to unhold if he thinks it's ok

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Mar 24, 2025
@kubevirt-commenter-bot

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-windows2016
/test pull-kubevirt-e2e-kind-1.30-vgpu
/test pull-kubevirt-e2e-kind-sriov
/test pull-kubevirt-e2e-k8s-1.32-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.30-sig-network
/test pull-kubevirt-e2e-k8s-1.30-sig-storage
/test pull-kubevirt-e2e-k8s-1.30-sig-compute
/test pull-kubevirt-e2e-k8s-1.30-sig-operator
/test pull-kubevirt-e2e-k8s-1.31-sig-network
/test pull-kubevirt-e2e-k8s-1.31-sig-storage
/test pull-kubevirt-e2e-k8s-1.31-sig-compute
/test pull-kubevirt-e2e-k8s-1.31-sig-operator

@iholder101
Contributor

Thank you very much @vladikr and everyone else involved!
/hold cancel

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 24, 2025
@iholder101
Contributor

FYI @ShellyKa13 @akalenyu

@vladikr
Member Author

vladikr commented Mar 24, 2025

/retest

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@kubevirt-bot kubevirt-bot merged commit 4d52739 into kubevirt:main Mar 25, 2025
40 checks passed
Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
area/controller
dco-signoff: yes - Indicates the PR's author has DCO signed all their commits.
lgtm - Indicates that a PR is ready to be merged.
release-note-none - Denotes a PR that doesn't merit a release note.
sig/compute
sig/storage
size/XXL

6 participants