Deploy SFTP Gateway on Kubernetes
TLDR - Quick Summary
What: Deploy SFTP Gateway as a containerized app on GKE Autopilot with Cloud SQL
Steps: Create Cloud SQL instance, configure Workload Identity, apply kubernetes-manifest.yaml
Overview
In this article we'll walk through deploying SFTP Gateway as a containerized application. The SFTP Gateway container images are available on Docker Hub, so you can pull them directly without needing to download and load image files.
A Kubernetes manifest file references the Docker Hub images and deploys the SFTP Gateway containers.
Afterward, you can access your deployment in a web browser and create your initial Web Admin account.
Docker Hub Images
The SFTP Gateway container images are available on Docker Hub:
- Backend: thorntech/sftpgateway-backend:latest
- Admin UI: thorntech/sftpgateway-admin-ui:latest
You can pull these images directly:
docker pull thorntech/sftpgateway-backend:latest
docker pull thorntech/sftpgateway-admin-ui:latest
Prerequisites: Set up Cloud SQL and Workload Identity
Before deploying SFTP Gateway, you need to create a Cloud SQL PostgreSQL instance and configure Workload Identity for secure database access.
Step 1: Create a GKE Autopilot cluster
gcloud container clusters create-auto sftpgw-cluster \
--region=us-central1 \
--project=YOUR_PROJECT_ID
Step 2: Create a Cloud SQL PostgreSQL instance
# Create the Cloud SQL instance
gcloud sql instances create sftpgw-db \
--database-version=POSTGRES_15 \
--tier=db-f1-micro \
--region=us-central1 \
--project=YOUR_PROJECT_ID
# Set the postgres user password
gcloud sql users set-password postgres \
--instance=sftpgw-db \
--password=YOUR_SECURE_PASSWORD \
--project=YOUR_PROJECT_ID
# Create the sftpgw database user
gcloud sql users create sftpgw \
--instance=sftpgw-db \
--password=YOUR_DATABASE_PASSWORD \
--project=YOUR_PROJECT_ID
# Create the sftpgw database
gcloud sql databases create sftpgw \
--instance=sftpgw-db \
--project=YOUR_PROJECT_ID
Note: Save the sftpgw user's password; you'll need it when creating the Kubernetes secrets in Step 5.
Step 3: Set up Workload Identity
Workload Identity allows GKE pods to authenticate to Cloud SQL without storing credentials.
# Create a GCP service account for Cloud SQL access
gcloud iam service-accounts create sftpgw-cloudsql \
--display-name="SFTP Gateway Cloud SQL" \
--project=YOUR_PROJECT_ID
# Grant Cloud SQL Client role
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:sftpgw-cloudsql@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/cloudsql.client"
# Allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
sftpgw-cloudsql@YOUR_PROJECT_ID.iam.gserviceaccount.com \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:YOUR_PROJECT_ID.svc.id.goog[sftpgw/sftpgw-sa]" \
--project=YOUR_PROJECT_ID
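The `--member` flag on the last command encodes which Kubernetes service account may impersonate the GCP service account. A small sketch of how that string is assembled (the project ID here is a placeholder; the namespace and service account name are the ones this guide's manifest uses):

```shell
# Workload Identity member format:
#   serviceAccount:<PROJECT_ID>.svc.id.goog[<K8S_NAMESPACE>/<K8S_SERVICE_ACCOUNT>]
# The namespace (sftpgw) and service account (sftpgw-sa) must match the manifest.
PROJECT_ID=my-project        # placeholder; use your real project ID
NAMESPACE=sftpgw
KSA=sftpgw-sa
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA}]"
echo "$MEMBER"    # prints: serviceAccount:my-project.svc.id.goog[sftpgw/sftpgw-sa]
```

If the namespace or service account name in this string doesn't match what you deploy, the proxy sidecar will fail to authenticate.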
Step 4: Get the Cloud SQL connection name
gcloud sql instances describe sftpgw-db --project=YOUR_PROJECT_ID --format="value(connectionName)"
This will return something like YOUR_PROJECT_ID:us-central1:sftpgw-db. You'll need this for the manifest.
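If you'd rather not edit the manifest by hand, a sed expression can substitute the connection name into it. The sketch below demonstrates the substitution on the relevant line (the connection name shown is a hypothetical example; run the same expression with `sed -i` against kubernetes-manifest.yaml using your real value):

```shell
# Substitute the real connection name for the manifest placeholder
CONN_NAME="my-project:us-central1:sftpgw-db"   # use the value from the describe command
echo '- "YOUR_PROJECT_ID:us-central1:sftpgw-db"' \
  | sed "s|YOUR_PROJECT_ID:us-central1:sftpgw-db|$CONN_NAME|"
```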
Step 5: Generate Security Credentials and Secrets
Before deploying, you must generate secure values for authentication and TLS. Run the following commands on your local workstation or in Cloud Shell.
Generate security credentials
# Generate security credentials
SECURITY_CLIENT_ID=$(openssl rand -hex 16)
SECURITY_CLIENT_SECRET=$(openssl rand -hex 32)
SECURITY_JWT_SECRET=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || openssl rand -hex 16)
# Verify the values were generated
echo "CLIENT_ID: $SECURITY_CLIENT_ID"
echo "CLIENT_SECRET: $SECURITY_CLIENT_SECRET"
echo "JWT_SECRET: $SECURITY_JWT_SECRET"
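As a sanity check, `openssl rand -hex N` emits 2*N characters (each random byte becomes two hex digits), so the generated values have fixed lengths:

```shell
# 16 random bytes -> 32 hex characters; 32 random bytes -> 64 hex characters
SECURITY_CLIENT_ID=$(openssl rand -hex 16)
SECURITY_CLIENT_SECRET=$(openssl rand -hex 32)
echo "${#SECURITY_CLIENT_ID} ${#SECURITY_CLIENT_SECRET}"   # prints: 32 64
```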
Generate TLS certificate
Generate a self-signed certificate for the admin UI (or use your own certificate):
openssl req -x509 -newkey rsa:2048 -keyout tls.key -out tls.crt \
-days 365 -nodes -subj "/CN=sftpgw-ui"
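Before loading the certificate into a secret, you can confirm its subject and expiry with `openssl x509` (the `req` command is repeated here only so the snippet is self-contained):

```shell
# Generate the self-signed pair, then print the certificate's subject and expiry
openssl req -x509 -newkey rsa:2048 -keyout tls.key -out tls.crt \
  -days 365 -nodes -subj "/CN=sftpgw-ui" 2>/dev/null
openssl x509 -in tls.crt -noout -subject -enddate
```

The subject should show CN = sftpgw-ui and the end date should be roughly a year out.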
Create Kubernetes namespace and secrets
# Create namespace
kubectl create namespace sftpgw
# Create secrets from generated values
kubectl create secret generic sftpgw-secrets -n sftpgw \
--from-literal=SECURITY_CLIENT_ID="$SECURITY_CLIENT_ID" \
--from-literal=SECURITY_CLIENT_SECRET="$SECURITY_CLIENT_SECRET" \
--from-literal=SECURITY_JWT_SECRET="$SECURITY_JWT_SECRET" \
--from-literal=DB_PASSWORD="YOUR_DATABASE_PASSWORD"
# Create TLS secret from generated certificates
kubectl create secret generic sftpgw-ui-tls -n sftpgw \
--from-file=tls.crt=tls.crt \
--from-file=tls.key=tls.key
# Clean up local certificate files
rm -f tls.crt tls.key
Important: Replace YOUR_DATABASE_PASSWORD with the Cloud SQL database password you created in Step 2.
Verify secrets were created
kubectl get secrets -n sftpgw
# Expected output:
# NAME TYPE DATA AGE
# sftpgw-secrets Opaque 4 10s
# sftpgw-ui-tls Opaque 2 10s
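The DATA column counts keys only; the values themselves are stored base64-encoded inside each Secret object. A sketch of the encoding round-trip (no cluster needed), plus the kind of command you'd use to read a value back:

```shell
# Reading a value back from the cluster would look like (not run here):
#   kubectl get secret sftpgw-secrets -n sftpgw \
#     -o jsonpath='{.data.DB_PASSWORD}' | base64 -d
# The base64 round-trip Kubernetes performs on secret data:
encoded=$(printf '%s' 'YOUR_DATABASE_PASSWORD' | base64)
printf '%s' "$encoded" | base64 -d    # prints: YOUR_DATABASE_PASSWORD
```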
Step 6: Apply the Kubernetes manifest
On your local workstation (or in Cloud Shell), create the following file:
kubernetes-manifest.yaml
(See the contents of this file at the bottom of this document)
The manifest file is pre-configured to use the Docker Hub images:
- Backend image: thorntech/sftpgateway-backend:latest
- Admin UI image: thorntech/sftpgateway-admin-ui:latest
Required Configuration
Before applying the manifest, update the following values:
- Cloud SQL connection name: Replace YOUR_PROJECT_ID:us-central1:sftpgw-db in the cloud-sql-proxy container args with your actual Cloud SQL connection name (from Step 4).
- Workload Identity annotation: Replace YOUR_PROJECT_ID in the ServiceAccount annotation with your GCP project ID.
The manifest references the secrets you created in Step 5, so you don't need to edit security credentials or certificates in the manifest itself.
Deploy the manifest
kubectl apply -f kubernetes-manifest.yaml
What did I just deploy?
The Kubernetes manifest file deploys the following resources:
Pods:
- sftpgw-ui: The web admin portal running on nginx
- sftpgw-backend: The SFTP server with a Cloud SQL Auth Proxy sidecar
  - Main container: Java backend handling SFTP and API
  - Sidecar container: Cloud SQL Auth Proxy for secure database connectivity
Services:
- sftpgw-ui (LoadBalancer): Public IP for the web admin portal
- sftpgw-backend (ClusterIP): Internal service for the backend
- sftpgw-backend-lb (LoadBalancer): Public IP for SFTP connections (single-replica setup)
- sftpgw-sftp-neg (ClusterIP + NEG): Backend for the TCP Proxy LB (multi-replica setup; see TCP Proxy Load Balancer Setup)
Database:
- Cloud SQL PostgreSQL (managed service, not a pod)
Run the following command to get a list of pods:
kubectl get pods -n sftpgw
NAME READY STATUS RESTARTS AGE
sftpgw-backend-79fdc45c97-grr8b 2/2 Running 0 5m
sftpgw-ui-76d6d557c7-bvkl7 1/1 Running 0 5m
Note: The backend pod shows 2/2 because it has two containers (backend + cloud-sql-proxy).
How to connect to the web admin portal
:::tip Restrict Admin UI Access
The manifest includes loadBalancerSourceRanges on the UI service to restrict access to specific IP addresses. Before deploying, replace YOUR_IP_ADDRESS/32 with your actual IP (find it at https://ifconfig.me). This prevents unauthorized access to the admin portal from the public internet.
:::
Run the following command to get a list of services:
kubectl get service -n sftpgw
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sftpgw-backend ClusterIP 34.118.231.200 <none> 8080/TCP,22/TCP 5m
sftpgw-backend-lb LoadBalancer 34.118.226.57 34.139.141.133 22:31327/TCP 5m
sftpgw-ui LoadBalancer 34.118.234.34 34.74.250.188 80:31773/TCP,443:30730/TCP 5m
In the above output, make special note of the EXTERNAL-IP column:
- sftpgw-ui: This is the public IP for connecting to the web admin portal. In this example, it's 34.74.250.188.
- sftpgw-backend-lb: This is the public IP for connecting to the SFTP service. In this example, it's 34.139.141.133.
Paste the sftpgw-ui EXTERNAL-IP (e.g. 34.74.250.188) into your web browser.
You should see the SFTP Gateway web admin portal.
To configure your deployment, make the following changes:
- Create a web admin user and password
- Log in as this web admin user
- Go to the Settings tab and create a new Cloud Connection to a GCS bucket
- Go to the Folders tab and point the root folder to the Cloud Connection you just created
- Go to the Users tab and create a new SFTP user
How to connect to the SFTP service
The SFTP service is hosted on the Java backend.
Using the LoadBalancer service (default)
In the list of services, find sftpgw-backend-lb and look at the EXTERNAL-IP. In this example, it’s 34.139.141.133:
sftp robtest@34.139.141.133
Using the TCP Proxy Load Balancer (multi-replica)
If you set up the TCP Proxy LB (see TCP Proxy Load Balancer Setup), use the static IP you reserved:
# Get your TCP Proxy LB IP
gcloud compute addresses describe sftpgw-sftp-ip --global --project=YOUR_PROJECT_ID --format="value(address)"
# Connect
sftp robtest@<TCP_PROXY_LB_IP>
You should be prompted to accept the server's host key fingerprint, and then connect via SFTP if your credentials are correct.
Scaling
UI Scaling
The Admin UI can safely scale to multiple replicas without any issues:
kubectl scale deployment/sftpgw-ui -n sftpgw --replicas=2
The UI is stateless - it serves static files and proxies API requests to the backend. As long as SECURITY_CLIENT_ID and SECURITY_CLIENT_SECRET are configured (and match the backend), multiple UI pods work seamlessly.
Backend Scaling
The backend deployment can be scaled, but requires additional configuration for session management:
kubectl scale deployment/sftpgw-backend -n sftpgw --replicas=2
Note: Without sticky sessions or a shared session store, users may experience login issues when requests are load-balanced across multiple backend pods. See the Troubleshooting section below.
Licensing
The built-in license included in the backend image supports up to 5 SFTP users. For production deployments requiring more users, add a license key via environment variable.
Add the LICENSE environment variable to the backend deployment:
- name: LICENSE
value: "your-license-key-here"
Or add it to an existing deployment:
kubectl set env deployment/sftpgw-backend -n sftpgw LICENSE="your-license-key-here"
Contact Thorn Technologies to obtain a license key for additional users.
Client IP Preservation
By default, Kubernetes LoadBalancer services with externalTrafficPolicy: Cluster perform SNAT, replacing client IPs with internal node IPs. This means audit logs show node IPs instead of actual client IPs.
There are two approaches to preserve client IPs:
Option A: externalTrafficPolicy: Local (simple, single-replica)
Client → LoadBalancer → Node with Pod → Pod (direct, client IP preserved)
This is the simplest approach and works well for single-replica deployments. Traffic only goes to nodes that have a backend pod, so no SNAT occurs.
Trade-off: With Local mode, all connections from a single client IP are routed to the same node. This limits horizontal scaling because adding more replicas on different nodes doesn't distribute load from existing clients.
Option B: GCP TCP Proxy Load Balancer (recommended for multi-replica)
For deployments that need both client IP preservation and horizontal scaling, use a GCP TCP Proxy Load Balancer with PROXY Protocol:
Client → GCP TCP Proxy LB (injects PROXY Protocol header) → Pod (any node)
↑
Client IP preserved
via PROXY Protocol
The TCP Proxy LB is a fully managed Google Cloud service that:
- Distributes connections across all pods (true load balancing)
- Injects a PROXY Protocol v1 header containing the real client IP
- Requires no extra proxy pods (unlike HAProxy-based solutions)
SFTP Gateway's SSH library natively parses PROXY Protocol headers, so the real client IP appears in audit logs and is available for IP filtering.
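For reference, a PROXY protocol v1 header is a single plain-text line prepended to the TCP stream (format per the HAProxy PROXY protocol specification). The sketch below extracts the real client IP from a sample header matching the one that appears in the backend logs later in this guide:

```shell
# PROXY v1 line: "PROXY TCP4 <client-ip> <proxy-ip> <client-port> <dest-port>\r\n"
header='PROXY TCP4 203.0.113.50 35.190.25.9 35958 22'
client_ip=$(echo "$header" | awk '{print $3}')   # field 3 is the original client IP
echo "$client_ip"    # prints: 203.0.113.50
```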
Setup: See the TCP Proxy Load Balancer Setup section below for step-by-step instructions.
Performance Tuning
Resource Recommendations
Choose resources based on your expected throughput:
Light Workloads (<10 TB/day)
resources:
requests:
cpu: "2000m"
memory: "2Gi"
limits:
cpu: "4000m"
memory: "4Gi"
replicas: 1
Expected throughput: ~100-150 MB/s
Medium Workloads (10-50 TB/day)
resources:
requests:
cpu: "4000m"
memory: "4Gi"
limits:
cpu: "8000m"
memory: "8Gi"
env:
- name: JAVA_OPTS
value: "-Xms2g -Xmx6g -XX:+UseG1GC"
replicas: 1
Expected throughput: ~400-500 MB/s
High Workloads (50-150 TB/day)
resources:
requests:
cpu: "4000m"
memory: "4Gi"
limits:
cpu: "8000m"
memory: "8Gi"
env:
- name: JAVA_OPTS
value: "-Xms2g -Xmx6g -XX:+UseG1GC"
replicas: 3
Expected throughput: ~1,500-1,700 MB/s
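The tiers above map daily volume to sustained throughput. The rough arithmetic behind them (decimal units, assuming traffic is spread evenly over 24 hours; real traffic is bursty, so provision above the average):

```shell
# 1 TB = 1e6 MB and 1 day = 86400 s, so 50 TB/day is roughly 579 MB/s sustained
tb_per_day=50
awk -v tb="$tb_per_day" 'BEGIN { printf "%.0f\n", tb * 1e6 / 86400 }'   # prints: 579
```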
Horizontal Pod Autoscaler
For automatic scaling based on load:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: sftpgw-backend-hpa
namespace: sftpgw
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: sftpgw-backend
minReplicas: 2
maxReplicas: 8
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
behavior:
scaleDown:
stabilizationWindowSeconds: 300
scaleUp:
stabilizationWindowSeconds: 30
TCP Proxy Load Balancer Setup
If you need both client IP preservation and multi-replica scaling (see Client IP Preservation), follow these steps to set up a GCP TCP Proxy Load Balancer.
Step 1: Add the NEG-backed Service to your manifest
Replace the sftpgw-backend-lb LoadBalancer service in your manifest with a ClusterIP service that has a NEG annotation:
# NEG-backed Service for TCP Proxy Load Balancer
apiVersion: v1
kind: Service
metadata:
name: sftpgw-sftp-neg
namespace: sftpgw
annotations:
cloud.google.com/neg: '{"exposed_ports":{"22":{"name":"sftpgw-sftp-neg"}}}'
spec:
selector:
app: sftpgw-backend
ports:
- name: sftp
port: 22
targetPort: 22
protocol: TCP
type: ClusterIP
Step 2: Enable PROXY Protocol on the backend
Add the following environment variables to the sftpgw-backend container:
- name: FEATURES_SFTP_SUBSYSTEM_LOAD_BALANCER_ENABLE_PROXY_PROTOCOL
value: "true"
- name: FEATURES_SFTP_SUBSYSTEM_LOAD_BALANCER_UNRESTRICTED_MODE
value: "true"
Step 3: Apply the manifest and wait for NEGs
kubectl apply -f kubernetes-manifest.yaml
# Verify NEGs were created (may take a minute)
gcloud compute network-endpoint-groups list --project=YOUR_PROJECT_ID
Step 4: Create the TCP Proxy LB infrastructure
PROJECT=YOUR_PROJECT_ID
# Health check (uses HTTP on port 8080, NOT TCP on port 22)
# Port 22 expects a PROXY Protocol header, so a raw TCP health check would fail
gcloud compute health-checks create tcp sftpgw-sftp-health \
--port=8080 \
--check-interval=10s --timeout=5s \
--healthy-threshold=2 --unhealthy-threshold=3 \
--project=$PROJECT
# Backend service
gcloud compute backend-services create sftpgw-sftp-backend \
--global --protocol=TCP \
--health-checks=sftpgw-sftp-health \
--timeout=3600s \
--connection-draining-timeout=300s \
--project=$PROJECT
# Add NEGs from each zone in your region
for zone in us-central1-a us-central1-b us-central1-c; do
gcloud compute backend-services add-backend sftpgw-sftp-backend \
--global \
--network-endpoint-group=sftpgw-sftp-neg \
--network-endpoint-group-zone=$zone \
--balancing-mode=CONNECTION \
--max-connections-per-endpoint=100 \
--project=$PROJECT
done
# TCP proxy with PROXY_V1 header injection
gcloud compute target-tcp-proxies create sftpgw-sftp-proxy \
--backend-service=sftpgw-sftp-backend \
--proxy-header=PROXY_V1 \
--project=$PROJECT
# Reserve a static IP
gcloud compute addresses create sftpgw-sftp-ip --global --project=$PROJECT
# Get the reserved IP
SFTP_IP=$(gcloud compute addresses describe sftpgw-sftp-ip \
--global --project=$PROJECT --format="value(address)")
echo "SFTP IP: $SFTP_IP"
# Create forwarding rule on port 22
gcloud compute forwarding-rules create sftpgw-sftp-rule \
--global \
--target-tcp-proxy=sftpgw-sftp-proxy \
--address=sftpgw-sftp-ip \
--ports=22 \
--project=$PROJECT
Step 5: Create firewall rules
The TCP Proxy LB requires firewall rules to reach your pods:
# Allow health check probes to port 8080 (actuator endpoint)
gcloud compute firewall-rules create allow-health-check-8080 \
--network=default --action=allow --direction=ingress \
--source-ranges=35.191.0.0/16,130.211.0.0/22 \
--rules=tcp:8080 --project=$PROJECT
# Allow TCP Proxy LB data plane to port 22
gcloud compute firewall-rules create allow-tcp-proxy-sftp \
--network=default --action=allow --direction=ingress \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--rules=tcp:22 --project=$PROJECT
Step 6: Verify
# Check backends are healthy
gcloud compute backend-services get-health sftpgw-sftp-backend \
--global --project=$PROJECT
# Test SFTP connection (replace with your SFTP user and key)
sftp -i ~/.ssh/your_key your_user@$SFTP_IP
The backend logs should show the real client IP:
Parsing PROXY protocol string [PROXY TCP4 203.0.113.50 35.190.25.9 35958 22]
Changing remote address to proxy supplied 203.0.113.50:35958
Troubleshooting
Login succeeds but immediately logs out
If you can log in to the web admin portal but are immediately logged out (or see 401 errors), this is typically caused by running multiple backend replicas without a shared JWT signing key.
Cause: If SECURITY_JWT_SECRET is not provided, each backend pod generates its own signing key at startup. When requests are load-balanced across multiple pods, a request may hit a different pod than the one that issued the token, causing validation to fail.
Solution: Ensure SECURITY_JWT_SECRET is set via Kubernetes Secret (as shown in Step 5). When all pods share the same JWT secret, they can validate tokens issued by any other pod.
Note: This issue does not affect SFTP connections - only the admin web portal.
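To see why a shared secret matters, recall a JWT's shape: three dot-separated segments (header.payload.signature). Any pod can decode the payload, but only a pod holding the same SECURITY_JWT_SECRET can verify the signature. A no-cluster sketch with a toy token (real JWTs use unpadded base64url; plain base64 is used here for simplicity, and the header and signature segments are placeholders, not a real signed token):

```shell
# Build a toy token: header.payload.signature
payload=$(printf '%s' '{"sub":"admin"}' | base64)
token="eyJhbGciOiJIUzI1NiJ9.${payload}.signature"
# Decoding the payload is trivial for anyone...
printf '%s' "$token" | cut -d. -f2 | base64 -d    # prints: {"sub":"admin"}
# ...but validating the signature segment requires the shared HMAC secret,
# which is why every backend pod must see the same SECURITY_JWT_SECRET.
```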
Admin UI shows certificate error
If the browser shows a certificate warning, this is expected when using self-signed certificates. You can:
- Proceed through the warning for testing
- Replace the self-signed certificate with a valid certificate from a Certificate Authority
Cannot connect to SFTP service
- Verify the backend pod is running: kubectl get pods -n sftpgw
- Check that the load balancer has an external IP: kubectl get svc sftpgw-backend-lb -n sftpgw
- Ensure your SFTP user has been created in the web admin portal
- Verify the user's SSH key or password is configured correctly
The Kubernetes manifest file
# SFTP Home PersistentVolumeClaim
# Note: Namespace was created in Step 5 when creating secrets
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: sftpgw-home-pvc
namespace: sftpgw
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
# Service Account for Workload Identity
apiVersion: v1
kind: ServiceAccount
metadata:
name: sftpgw-sa
namespace: sftpgw
annotations:
# Replace YOUR_PROJECT_ID with your GCP project ID
iam.gke.io/gcp-service-account: sftpgw-cloudsql@YOUR_PROJECT_ID.iam.gserviceaccount.com
---
# SFTP Gateway Backend Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: sftpgw-backend
namespace: sftpgw
spec:
replicas: 1
selector:
matchLabels:
app: sftpgw-backend
template:
metadata:
labels:
app: sftpgw-backend
spec:
serviceAccountName: sftpgw-sa
containers:
# Main SFTP Gateway container
- name: sftpgw-backend
image: thorntech/sftpgateway-backend:latest
env:
- name: SECURITY_CLIENT_ID
valueFrom:
secretKeyRef:
name: sftpgw-secrets
key: SECURITY_CLIENT_ID
- name: SECURITY_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: sftpgw-secrets
key: SECURITY_CLIENT_SECRET
- name: SECURITY_JWT_SECRET
valueFrom:
secretKeyRef:
name: sftpgw-secrets
key: SECURITY_JWT_SECRET
# Cloud SQL Auth Proxy connects on localhost:5432
- name: SPRING_DATASOURCE_URL
value: "jdbc:postgresql://localhost:5432/sftpgw"
- name: SPRING_DATASOURCE_USERNAME
value: "sftpgw"
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: sftpgw-secrets
key: DB_PASSWORD
- name: SPRING_PROFILES_ACTIVE
value: "local"
- name: FEATURES_INSTANCE_CLOUD_PROVIDER
value: "gcp"
ports:
- containerPort: 8080
- containerPort: 22
volumeMounts:
- name: sftpgw-home
mountPath: /home
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
livenessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 90
periodSeconds: 15
timeoutSeconds: 10
failureThreshold: 4
readinessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 45
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
# Cloud SQL Auth Proxy sidecar
- name: cloud-sql-proxy
image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
args:
- "--port=5432"
# Replace with your Cloud SQL connection name
- "YOUR_PROJECT_ID:us-central1:sftpgw-db"
securityContext:
runAsNonRoot: true
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: sftpgw-home
persistentVolumeClaim:
claimName: sftpgw-home-pvc
---
# SFTP Gateway Backend Service
apiVersion: v1
kind: Service
metadata:
name: sftpgw-backend
namespace: sftpgw
spec:
selector:
app: sftpgw-backend
ports:
- name: http
port: 8080
targetPort: 8080
- name: sftp
port: 22
targetPort: 22
type: ClusterIP
---
# Note: TLS secret (sftpgw-ui-tls) was created in Step 5
# SFTP Gateway UI Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: sftpgw-ui
namespace: sftpgw
spec:
replicas: 1
selector:
matchLabels:
app: sftpgw-ui
template:
metadata:
labels:
app: sftpgw-ui
spec:
containers:
- name: sftpgw-ui
image: thorntech/sftpgateway-admin-ui:latest
env:
- name: BACKEND_URL
value: "http://sftpgw-backend:8080/"
- name: SECURITY_CLIENT_ID
valueFrom:
secretKeyRef:
name: sftpgw-secrets
key: SECURITY_CLIENT_ID
- name: SECURITY_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: sftpgw-secrets
key: SECURITY_CLIENT_SECRET
- name: FEATURES_INSTANCE_CLOUD_PROVIDER
value: "gcp"
- name: WEBSITE_BUNDLE_CRT
valueFrom:
secretKeyRef:
name: sftpgw-ui-tls
key: tls.crt
- name: WEBSITE_KEY
valueFrom:
secretKeyRef:
name: sftpgw-ui-tls
key: tls.key
ports:
- containerPort: 80
- containerPort: 443
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 30
periodSeconds: 30
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 10
periodSeconds: 10
restartPolicy: Always
---
# SFTP Gateway UI Service
apiVersion: v1
kind: Service
metadata:
name: sftpgw-ui
namespace: sftpgw
spec:
selector:
app: sftpgw-ui
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
type: LoadBalancer
# Restrict Admin UI access to specific IP addresses
loadBalancerSourceRanges:
- "YOUR_IP_ADDRESS/32" # Replace with your IP (find it at https://ifconfig.me)
---
# SFTP LoadBalancer Service
apiVersion: v1
kind: Service
metadata:
name: sftpgw-backend-lb
namespace: sftpgw
spec:
selector:
app: sftpgw-backend
ports:
- name: sftp
port: 22
targetPort: 22
protocol: TCP
type: LoadBalancer
# Preserve client IP addresses in audit logs
externalTrafficPolicy: Local