Implement Kubernetes network policies with a deny-all baseline, then explicitly allow required pod-to-pod communication to reduce attack surface. When development teams work remotely, securing Kubernetes clusters becomes critical—network policies control traffic flow between pods, protecting clusters from distributed access points and devices. This guide walks through implementing effective network policies tailored for remote team environments, including baseline deny-all policies, egress/ingress rules, and practical YAML configurations.
Understanding Kubernetes Network Policies
Kubernetes network policies function as firewall rules for pod-to-pod communication. By default, Kubernetes allows all traffic between pods, which creates a significant security gap, especially in multi-tenant or distributed team setups. Network policies let you explicitly define which pods can communicate with each other, substantially reducing the attack surface.
A network policy consists of three main components: pod selection, ingress rules defining allowed incoming traffic, and egress rules defining allowed outgoing traffic. When you apply a policy, only the traffic matching your specified rules is permitted—all other traffic gets blocked.
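As a minimal illustration of those three components (all names and labels here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy          # hypothetical name
  namespace: default
spec:
  podSelector:                  # 1. which pods the policy governs
    matchLabels:
      app: web                  # hypothetical label
  policyTypes:
    - Ingress
    - Egress
  ingress:                      # 2. allowed incoming traffic
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # hypothetical label
  egress:                       # 3. allowed outgoing traffic
    - to:
        - podSelector:
            matchLabels:
              app: api          # hypothetical label
```

Any traffic not matched by an ingress or egress rule is dropped for the selected pods.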
Baseline Policy for Remote Team Clusters
Start with a deny-all policy as your foundation, then explicitly allow only required communication paths. This zero-trust approach ensures that new pods cannot communicate until you explicitly permit it.
Create a file named default-deny-all.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
Apply this policy using kubectl:
```sh
kubectl apply -f default-deny-all.yaml
```
After applying the deny-all policy, test that pods cannot communicate. You should see connection timeouts when attempting to access services that haven’t been explicitly allowed.
Implementing Namespace Isolation
Remote teams often share clusters across multiple projects or environments. Namespace-based isolation provides a logical separation that network policies can enforce. Create policies that restrict traffic between namespaces while permitting necessary communication.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: production
        - podSelector: {}
```
This policy allows traffic only from within the production namespace. Note that for the namespaceSelector to match, the namespace itself must carry the name: production label; on Kubernetes 1.21 and later you can instead match the automatically applied kubernetes.io/metadata.name label, as the allow-essentials policy below does. Remote team members working on staging or development environments cannot accidentally or intentionally access production resources.
Protecting Sensitive Services
Your cluster likely contains services that require stricter access controls—databases, authentication services, or internal APIs. Create dedicated policies for these critical components.
For a database pod that should only accept connections from application pods:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
      role: primary
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
              role: application
      ports:
        - protocol: TCP
          port: 5432
```
Label your application pods accordingly:
```sh
kubectl label pods/backend-xyz app=backend role=application -n production
```
Egress Control for Remote Workers
Remote team members sometimes run local development environments that interact with the cluster, widening its exposure. Egress policies prevent compromised or unauthorized pods from exfiltrating data to external servers.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: sensitive-workload
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: approved-service
      ports:
        - protocol: TCP
          port: 443
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
```
This policy allows the sensitive workload to communicate only with approved services and DNS, blocking all other outbound connections.
Enabling DNS and Essential Services
Every pod needs DNS resolution and often requires access to external APIs for legitimate purposes. Create a policy that allows essential outbound traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-essentials
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443
```
This policy permits DNS queries to the kube-system namespace and HTTPS traffic to public IP addresses only, blocking private network access.
Testing Your Policies
After applying network policies, verify they work as expected. Use a debug pod to test connectivity:
```sh
kubectl run debug-pod --image=busybox:1.36 --restart=Never -- sleep 3600
# -T 5 makes busybox wget give up after 5 seconds instead of hanging
kubectl exec -it debug-pod -- wget -qO- -T 5 http://service-name.namespace.svc.cluster.local
```
The connection should time out if no policy permits it. Inspect the applied policies with kubectl describe networkpolicy -n <namespace> and adjust rules accordingly.
Remote Team Workflow Considerations
Network policy management requires coordination when your engineering team is distributed. A few patterns that work well for remote teams:
Policy-as-Code with Git Review
Store all network policy YAML files in a dedicated directory in your infrastructure repository. Require pull request reviews from at least one other engineer before applying any policy change to production. This async review process works naturally for distributed teams and creates a full audit history of every security decision.
Structure your repository like this:
```
k8s/
  network-policies/
    base/
      default-deny-all.yaml
      allow-essentials.yaml
    production/
      database-access.yaml
      namespace-isolation.yaml
    staging/
      staging-allow-cross-namespace.yaml
```
Use Kustomize overlays to apply environment-specific policies without duplicating base configurations. This keeps staging and production consistent while allowing different access patterns where needed.
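A minimal Kustomize sketch matching that layout might look like this (two separate kustomization.yaml files shown together; treat the exact contents as an assumption about your setup):

```yaml
# k8s/network-policies/base/kustomization.yaml
resources:
  - default-deny-all.yaml
  - allow-essentials.yaml
---
# k8s/network-policies/production/kustomization.yaml
# Pulls in the shared base and layers production-only policies on top.
resources:
  - ../base
  - database-access.yaml
  - namespace-isolation.yaml
```

Each environment is then applied with kubectl's built-in Kustomize support, e.g. kubectl apply -k k8s/network-policies/production.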
Coordinating Policy Rollouts Across Timezones
Remote teams spanning multiple timezones face a coordination challenge when rolling out security changes: a policy applied during one engineer’s afternoon may break services for colleagues who start their day hours later. Establish two rules:
- Apply policy changes during an agreed overlap window when multiple team members are online
- Use a staging namespace to validate policy behavior before touching production
A staging validation run looks like this:
```sh
# Apply to staging first
kubectl apply -f network-policies/ -n staging

# Run integration tests against staging
./scripts/smoke-test.sh staging

# Wait for async approval from team before prod
```
Configure your CI pipeline to automate staging validation and post results to your team’s async communication channel before any production apply is considered.
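One way to sketch this, assuming GitHub Actions and the script path shown above (the workflow name, trigger paths, and credential setup are all assumptions, not a prescribed configuration):

```yaml
# .github/workflows/network-policy-staging.yaml (hypothetical workflow)
name: validate-network-policies
on:
  pull_request:
    paths:
      - "k8s/network-policies/**"
jobs:
  staging-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes the runner already has kubectl and a kubeconfig
      # scoped to the staging namespace only.
      - name: Apply policies to staging
        run: kubectl apply -f k8s/network-policies/ -n staging
      - name: Run smoke tests against staging
        run: ./scripts/smoke-test.sh staging
      # Posting results to your team's async channel depends on the
      # chat tool; a webhook notification step would go here.
```

Scoping the CI credentials to staging keeps an automated pipeline from ever touching production without the human approval step.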
Comparing Network Policy Tools
The standard Kubernetes NetworkPolicy resource covers most use cases, but several tools extend what is possible:
| Tool | Strengths | Remote Team Fit |
|---|---|---|
| Standard NetworkPolicy | Universal, supported everywhere | Good baseline for all teams |
| Calico | Rich L7 policies, global network sets | Strong for multi-cloud remote teams |
| Cilium | eBPF-based, service mesh integration | Best observability for distributed debugging |
| Antrea | VMware integration, flow export | Good for on-prem hybrid teams |
Cilium deserves particular attention for remote teams. Its Hubble observability layer visualizes real-time traffic flows across your cluster, which is invaluable when a distributed team needs to diagnose why a service cannot reach another without physically being in the same room. Engineers can share Hubble dashboard links rather than coordinating live kubectl sessions.
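As a sketch of what that extra expressiveness looks like, assuming Cilium is installed as the CNI (the names, labels, and path pattern here are hypothetical), a CiliumNetworkPolicy can restrict traffic at L7, which the standard NetworkPolicy resource cannot:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-api-access          # hypothetical name
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: backend             # hypothetical label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend      # hypothetical label
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          # L7 rule: only GET requests to /api/... are allowed;
          # other methods and paths are rejected at the proxy.
          rules:
            http:
              - method: GET
                path: "/api/.*"
```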
Monitoring and Maintenance
Network policies require ongoing attention as your applications evolve. Review policy logs regularly and update rules when adding new services. Document your policy decisions so remote team members understand the security boundaries.
Consider using tools like Calico or Cilium that provide enhanced network policy capabilities beyond the Kubernetes specification, including more sophisticated traffic matching and visualization.
Schedule a monthly async review where engineers post any observed policy gaps or unnecessary restrictions to a shared document. This keeps security posture current without requiring synchronous meetings and gives every team member — regardless of timezone — a voice in how the cluster is protected.
Frequently Asked Questions
Do network policies work on managed Kubernetes services like EKS, GKE, or AKS?
Yes, but you need to verify that your CNI plugin supports network policies. On EKS this historically meant installing a policy engine such as Calico alongside the default aws-node plugin, though recent versions of the Amazon VPC CNI add native NetworkPolicy support. GKE and AKS both support network policies natively when enabled during cluster creation. Check your provider's documentation before assuming policies are enforced.
Can network policies block traffic from cluster administrators?
No. Network policies apply to pod-to-pod traffic, not to kubectl or direct API server access. A user with kubectl access and the right RBAC permissions can still interact with any pod regardless of network policies. Network policies and RBAC are complementary controls — you need both.
What happens when two conflicting policies apply to the same pod?
Kubernetes applies a union of all matching policies. If any policy permits the traffic, it is allowed. There is no deny priority — only explicit allows. This means your deny-all policy blocks traffic by default, and any subsequent policy that permits specific traffic takes effect additively. You cannot write a policy that overrides a more permissive one.
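For example, with the deny-all baseline from earlier in place, a second policy (hypothetical labels) additively opens a single path without overriding anything:

```yaml
# With default-deny-all applied, this policy additively allows
# frontend pods to reach backend pods on port 8080; all other
# traffic in the namespace remains blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # hypothetical label
      ports:
        - protocol: TCP
          port: 8080
```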