show namespace on delete #126619
Conversation
Please note that we're already in Test Freeze for this release. Fast forwards are scheduled to happen every 6 hours; the most recent run was: Fri Aug 9 23:05:00 UTC 2024.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Welcome @totegamma!
Hi @totegamma. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: totegamma The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/ok-to-test
/test pull-kubernetes-e2e-gce
How about this UX?

```shell
$ kubectl delete -f ... --dry-run=server
deployment.apps "myapp" deleted from default namespace (server dry run)
$
```
@sftim Thank you for your suggestion. Indeed, I was slightly concerned that the order of resource names might change, so it seems like a good idea from a clarity standpoint as well! I'll go ahead and make the change.
(Force-pushed from e36689d to 5b78d69.)
Oh, I noticed that there is another PR, #126626, addressing the same issue that I created. By the way, in the original issue I mentioned only the dry-run stage. However, while creating this PR, I realized that it would be fine to print the message during the non-dry-run stage as well, so I made that change in this PR.
Thanks, LGTM
I don't like the tests being separate like this, but I don't think this PR is the place to fix it.
PTAL @ardaguclu
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Hi @ah8ad3 and @ardaguclu, thank you for the previous review and feedback! I noticed that this PR has been marked as stale, and I wanted to check in to ensure it doesn't get closed inadvertently. Could you please take another look when you have time? If there are any blockers or further improvements needed, I'd be happy to work on them. Thanks again for your time and guidance!
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I still believe this change, while not large in scope, offers significant benefits. If there are any concerns or areas needing clarification, I’d be happy to address them. Please let me know if there’s anything I can do to help move this forward. |
What type of PR is this?
/kind feature
What this PR does / why we need it:
Include the namespace in the output of the kubectl delete command.
current:
changed:
(Updated based on feedback)
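The gist of the change can be sketched in Go (a minimal illustration only; `deletionMessage` is a hypothetical helper, not kubectl's actual implementation): append the namespace to the deletion message when one is set.

```go
package main

import "fmt"

// deletionMessage is a hypothetical helper (not kubectl's real code) showing
// the shape of the change: add "from <ns> namespace" when a namespace is set.
func deletionMessage(kind, name, namespace string, serverDryRun bool) string {
	msg := fmt.Sprintf("%s %q deleted", kind, name)
	if namespace != "" {
		msg += fmt.Sprintf(" from %s namespace", namespace)
	}
	if serverDryRun {
		msg += " (server dry run)"
	}
	return msg
}

func main() {
	// current: deployment.apps "myapp" deleted
	fmt.Println(deletionMessage("deployment.apps", "myapp", "", false))
	// changed: deployment.apps "myapp" deleted from myapp-dev namespace (server dry run)
	fmt.Println(deletionMessage("deployment.apps", "myapp", "myapp-dev", true))
}
```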
Why is this needed:
The current output is ambiguous: the resource name alone does not uniquely identify a resource, since the same name can exist in multiple namespaces.
When working with multiple namespaces like "myapp-prod" and "myapp-dev", and intending to tear down some resources in "myapp-dev", the command might look like this:
From this output, it's unclear whether the manifest targets "myapp-dev" or "myapp-prod". This ambiguity requires additional checks to ensure the correct namespace is being targeted.
Printing the namespace in the dry-run output would enhance clarity and confidence in identifying the targeted resources.
Other considerations
This change could also be applied to other operations like apply and replace. However, non-delete operations can be validated with the diff command, so I think it is acceptable to add this feature only for the delete operation.
Which issue(s) this PR fixes:
Fixes kubernetes/kubectl#1621
Does this PR introduce a user-facing change?