A Kubernetes operator that manages NFS servers as custom resources, providing dynamic provisioning and lifecycle management of NFS services within your cluster.
- Custom Resource Definition (CRD): Define NFS servers declaratively using Kubernetes resources
- Dynamic Provisioning: Automatically provision NFS servers with persistent storage
- Lifecycle Management: Handle creation, updates, and deletion of NFS server instances
- Storage Flexibility: Support for both StorageClass-based and pre-existing PersistentVolume storage
- High Availability: Configurable replica count for NFS server instances
- Service Discovery: Automatic service creation for NFS server connectivity
- Status Monitoring: Real-time status updates and health checks
- Kubernetes cluster (v1.20+)
- kubectl configured to access your cluster
- Cluster admin permissions
1. Install the CRDs and operator:

   ```sh
   kubectl apply -f https://github.com/sharedvolume/nfs-server-controller/releases/latest/download/install.yaml
   ```

2. Verify the installation:

   ```sh
   kubectl get deployment -n nfs-server-controller-system
   kubectl get crd nfsservers.sharedvolume.io
   ```
Create an NFS server using a StorageClass:

```yaml
apiVersion: sharedvolume.io/v1alpha1
kind: NfsServer
metadata:
  name: my-nfs-server
  namespace: default
spec:
  storage:
    capacity: "10Gi"
    storageClassName: "fast-ssd"
  replicas: 2
  path: "/shared"
```

Apply the configuration:

```sh
kubectl apply -f nfs-server.yaml
```

Once the NFS server is running, you can mount it in your pods:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: nfs-storage
      mountPath: /data
  volumes:
  - name: nfs-storage
    nfs:
      server: my-nfs-server.default.svc.cluster.local
      path: /shared
```

| Field | Type | Description | Required |
|---|---|---|---|
| `storage.capacity` | string | Storage capacity (e.g., "10Gi") | Yes |
| `storage.storageClassName` | string | StorageClass name for dynamic provisioning | No* |
| `storage.persistentVolume` | string | Pre-existing PersistentVolume name | No* |
| `replicas` | int32 | Number of NFS server replicas (default: 2) | No |
| `path` | string | NFS export path (default: "/nfs") | No |
| `image` | string | NFS server image (default: auto-detected) | No |

\*Either `storageClassName` or `persistentVolume` must be specified, but not both.
```yaml
apiVersion: sharedvolume.io/v1alpha1
kind: NfsServer
metadata:
  name: nfs-with-pv
spec:
  storage:
    capacity: "50Gi"
    persistentVolume: "my-existing-pv"
  replicas: 1
```

```yaml
apiVersion: sharedvolume.io/v1alpha1
kind: NfsServer
metadata:
  name: custom-nfs
spec:
  storage:
    capacity: "20Gi"
    storageClassName: "standard"
  image: "sharedvolume/nfs-server:custom"
  path: "/exports"
  replicas: 3
```

- Go 1.24+
- Docker
- kubectl
- Kind (for local testing)
1. Clone the repository:

   ```sh
   git clone https://github.com/sharedvolume/nfs-server-controller.git
   cd nfs-server-controller
   ```

2. Build the manager:

   ```sh
   make build
   ```

3. Run tests:

   ```sh
   make test
   ```

4. Build the Docker image:

   ```sh
   make docker-build IMG=nfs-server-controller:dev
   ```

5. Install the CRDs:

   ```sh
   make install
   ```

6. Run the controller locally:

   ```sh
   make run
   ```

7. Run end-to-end tests with Kind:

   ```sh
   make test-e2e
   ```
We welcome contributions! Please see our Contributing Guidelines for details on:
- Code of conduct
- Development setup
- Pull request process
- Testing requirements
For detailed development information, including how this project was built with Kubebuilder, see our Development Guide.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
The NFS Server Controller consists of:
- Custom Resource Definition (CRD): Defines the `NfsServer` resource schema
- Controller: Watches for `NfsServer` resources and manages their lifecycle
- Reconciler: Ensures the desired state matches the actual state by creating/updating:
  - PersistentVolumeClaims for storage
  - ReplicaSets for NFS server pods
  - Services for network access
The controller provides the following status information:
```sh
kubectl get nfsservers
```

```
NAME            READY   ADDRESS                                   CAPACITY
my-nfs-server   true    my-nfs-server.default.svc.cluster.local   10Gi
```

For detailed status:

```sh
kubectl describe nfsserver my-nfs-server
```
1. NFS server not ready:
   - Check PVC status: `kubectl get pvc`
   - Verify the storage class exists: `kubectl get storageclass`
   - Check pod logs: `kubectl logs -l app=my-nfs-server`

2. Mount issues from clients:
   - Ensure NFS client utilities are installed in client pods
   - Verify network policies allow NFS traffic
   - Check service endpoints: `kubectl get endpoints my-nfs-server`

3. Permission issues:
   - Verify the controller has proper RBAC permissions
   - Check if security policies allow privileged containers
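The RBAC requirement above follows from the architecture section: the controller must manage `NfsServer` objects plus the PersistentVolumeClaims, ReplicaSets, and Services it creates. A hedged sketch of a minimal ClusterRole (the name and exact verb list are assumptions for illustration, not the shipped manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-server-controller-role   # illustrative name
rules:
- apiGroups: ["sharedvolume.io"]
  resources: ["nfsservers", "nfsservers/status"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims", "services"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
```

Comparing a role like this against the one actually installed in `nfs-server-controller-system` is a quick way to spot missing permissions.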
View controller logs:

```sh
kubectl logs -n nfs-server-controller-system deployment/nfs-server-controller-manager
```

- NFS server pods run with a privileged security context (required for NFS functionality)
- Ensure proper network policies to restrict NFS access
- Consider using storage encryption for sensitive data
- Regularly update the NFS server image for security patches
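The network-policy recommendation above can be sketched as a standard Kubernetes NetworkPolicy. The pod labels (`app: my-nfs-server`, `role: nfs-client`) are illustrative assumptions, not labels the controller is documented to set; NFSv4 traffic uses TCP port 2049:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-nfs
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-nfs-server    # hypothetical label; match your NFS server pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: nfs-client  # hypothetical label for allowed client pods
    ports:
    - protocol: TCP
      port: 2049            # NFSv4
```

With this in place, only pods carrying the allowed label can reach the NFS service; all other in-cluster traffic to those pods is dropped.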
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: Report bugs and feature requests on GitHub Issues
- Discussions: Join community discussions on GitHub Discussions
- Security: Report security vulnerabilities privately to [email protected]
Built with Kubebuilder and inspired by the Kubernetes community's best practices for operators.