# 🚀 Hugging Face Spaces Deployment Guide

## Quick Deploy to HF Spaces (5 minutes)
### Step 1: Prepare Your Repository

Your repository should have these files in the root:

- ✅ `app.py` - Streamlit application
- ✅ `requirements.txt` - Minimal dependencies (streamlit, requests, numpy)
- ✅ `README.md` - With HF Spaces config at the top
### Step 2: Create HF Space

1. Go to huggingface.co/spaces
2. Click "Create new Space"
3. Fill in the details:
   - Owner: `ArchCoder`
   - Space name: `federated-credit-scoring`
   - Short description: Federated Learning Credit Scoring Demo with Privacy-Preserving Model Training
   - License: MIT
   - Space SDK: Streamlit (⚠️ NOT Docker)
   - Space hardware: Free
   - Visibility: Public
### Step 3: Upload Files

**Option A: Direct Upload**

1. Click "Create Space"
2. Upload these files: `app.py`, `requirements.txt`

**Option B: Connect GitHub (Recommended)**

1. In Space Settings → "Repository"
2. Connect your GitHub repo
3. Enable "Auto-deploy on push"
### Step 4: Wait for Build

- HF Spaces installs your dependencies and builds the Streamlit app
- This takes 2-3 minutes
### Step 5: Access Your App

Your app will be live at:

https://huggingface.co/spaces/ArchCoder/federated-credit-scoring
## 🎯 What Users Will See

- **Demo Mode**: Works immediately (no server needed)
- **Interactive Interface**: Enter features, get predictions
- **Educational Content**: Learn about federated learning
- **Professional UI**: Clean, modern design
## 🔧 Troubleshooting

**"Missing app file" error:**
- Ensure `app.py` is in the root directory
- Check that SDK is set to `streamlit` (not `docker`)

**Build fails:**
- Check that `requirements.txt` has minimal dependencies
- Ensure no heavy packages (tensorflow, etc.) in `requirements.txt`

**App doesn't load:**
- Check logs in HF Spaces
- Verify `app.py` has no syntax errors
## 📝 Required Files

`app.py` (root level):

```python
import streamlit as st
import requests
import numpy as np
import time

st.set_page_config(page_title="Federated Credit Scoring Demo", layout="centered")

# ... rest of your app code
```

`requirements.txt` (root level):

```
streamlit
requests
numpy
```

`README.md` (with HF config at top):

```yaml
title: Federated Credit Scoring
emoji: 🚀
colorFrom: red
colorTo: red
sdk: streamlit
app_port: 8501
tags:
  - streamlit
  - federated-learning
  - machine-learning
  - privacy
pinned: false
short_description: Federated Learning Credit Scoring Demo with Privacy-Preserving Model Training
license: mit
```
## 🎉 Success!

After deployment, you'll have:

- ✅ Live web app accessible to anyone
- ✅ No server setup required
- ✅ Professional presentation of your project
- ✅ Educational value for visitors

**Your federated learning demo will be live and working!** 🚀
# FinFedRAG Deployment Guide
## Overview
This project implements a federated learning framework with RAG capabilities for financial data. The system can be deployed using Docker Compose for local development or Kubernetes for production environments.
## Debugging and Monitoring
### Enhanced Debugging Features
The web application now includes comprehensive debugging capabilities:
1. **Debug Information Panel**: Located in the sidebar, shows:
   - Real-time server health status
   - Recent debug messages and logs
   - Connection error details
   - Client simulator status

2. **Detailed Error Logging**: All operations are logged with:
   - Connection attempts and failures
   - Server response details
   - Timeout and network error handling
   - Client registration and training status updates

3. **Real-time Status Monitoring**:
   - Server health checks
   - Training progress tracking
   - Client connection status
   - Error message history
### Using the Debug Features
1. **Enable Debug Mode**: Uncheck "Demo Mode" in the sidebar
2. **View Debug Information**: Expand the "Debug Information" section in the sidebar
3. **Monitor Logs**: Check the "Recent Logs" section for real-time updates
4. **Clear Logs**: Use the "Clear Debug Logs" button to reset the log history
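The debug log behavior described above can be sketched as a small ring buffer of timestamped messages. This is an illustrative sketch only; the class and method names (`DebugLog`, `log`, `recent`) are assumptions, not the app's actual API:

```python
from collections import deque
from datetime import datetime, timezone

class DebugLog:
    """Illustrative ring buffer of timestamped debug messages,
    mirroring the sidebar panel's log/clear behavior."""

    def __init__(self, max_entries=50):
        # deque with maxlen drops the oldest entry automatically
        self.entries = deque(maxlen=max_entries)

    def log(self, level, message):
        stamp = datetime.now(timezone.utc).strftime("%H:%M:%S")
        self.entries.append(f"[{stamp}] {level.upper()}: {message}")

    def recent(self, n=10):
        # newest-last slice for the "Recent Logs" section
        return list(self.entries)[-n:]

    def clear(self):
        # backs the "Clear Debug Logs" button
        self.entries.clear()

log = DebugLog(max_entries=100)
log.log("info", "health check passed")
log.log("error", "connection refused by server")
```

In the real app such a buffer would live in `st.session_state` so it survives Streamlit reruns.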
## Local Development Setup
### Prerequisites
- Python 3.8+
- Docker and Docker Compose
- Required Python packages (see requirements.txt)
### Quick Start

1. **Clone and Setup**:

   ```bash
   git clone <repository-url>
   cd FinFedRAG-Financial-Federated-RAG
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

2. **Start the Federated Server**:

   ```bash
   python src/main.py --mode server
   ```

3. **Start Multiple Clients** (in separate terminals):

   ```bash
   python src/main.py --mode client --client-id client1
   python src/main.py --mode client --client-id client2
   python src/main.py --mode client --client-id client3
   ```

4. **Run the Web Application**:

   ```bash
   streamlit run app.py
   ```
## Docker Compose Deployment

For containerized deployment:

```bash
cd docker
docker-compose up --build
```

This will start:
- 1 federated server on port 8000
- 3 federated clients
- All services connected via Docker network
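A compose file producing this topology could look roughly like the sketch below. The service names and build contexts are assumptions inferred from the Kubernetes section (`fl-server`, `fl-client`, `docker/Dockerfile.*`); the repository's actual `docker/docker-compose.yml` may differ:

```yaml
# Illustrative sketch, not the project's actual compose file
version: "3.8"
services:
  fl-server:
    build:
      context: ..
      dockerfile: docker/Dockerfile.server
    ports:
      - "8000:8000"
  fl-client:
    build:
      context: ..
      dockerfile: docker/Dockerfile.client
    environment:
      - SERVER_HOST=fl-server   # service name doubles as DNS name
    depends_on:
      - fl-server
    deploy:
      replicas: 3   # older docker-compose ignores this outside swarm;
                    # use `docker-compose up --scale fl-client=3` instead
```

With this layout, clients reach the server via the compose network's built-in DNS rather than hard-coded IPs.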
## Kubernetes Deployment

### Architecture Overview

The Kubernetes setup provides a production-ready deployment with:

- **Server Deployment**: Single federated learning server
- **Client Deployment**: Multiple federated learning clients (3 replicas)
- **Service Layer**: Internal service discovery
- **ConfigMaps**: Configuration management
- **Namespace Isolation**: Dedicated `federated-learning` namespace
### Components

1. **Server Deployment** (`kubernetes/deployments/server.yaml`)
   - Replicas: 1 (single server instance)
   - Port: 8000 (internal)
   - Config: Mounted from ConfigMap
   - Image: `fl-server:latest`

2. **Client Deployment** (`kubernetes/deployments/client.yaml`)
   - Replicas: 3 (multiple client instances)
   - Environment: `SERVER_HOST=fl-server-service`
   - Config: Mounted from ConfigMap
   - Image: `fl-client:latest`

3. **Service** (`kubernetes/services/service.yaml`)
   - Type: ClusterIP (internal communication)
   - Port: 8000
   - Selector: `app=fl-server`
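Putting the server-side fields together, `server.yaml` would look roughly like this sketch. Field values are taken from the component list above; anything else (label names, mount path) is an assumption, and the repository's actual manifest may differ:

```yaml
# Illustrative sketch of kubernetes/deployments/server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fl-server
  namespace: federated-learning
spec:
  replicas: 1                      # single server instance
  selector:
    matchLabels:
      app: fl-server               # matches the Service selector
  template:
    metadata:
      labels:
        app: fl-server
    spec:
      containers:
        - name: fl-server
          image: fl-server:latest
          imagePullPolicy: IfNotPresent   # locally built image
          ports:
            - containerPort: 8000
          volumeMounts:
            - name: server-config
              mountPath: /app/config      # assumed mount path
      volumes:
        - name: server-config
          configMap:
            name: server-config           # created in Deployment Steps below
```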
### Deployment Steps

1. **Build Docker Images**:

   ```bash
   docker build -f docker/Dockerfile.server -t fl-server:latest .
   docker build -f docker/Dockerfile.client -t fl-client:latest .
   ```

2. **Create Namespace**:

   ```bash
   kubectl create namespace federated-learning
   ```

3. **Create ConfigMaps**:

   ```bash
   kubectl create configmap server-config --from-file=config/server_config.yaml -n federated-learning
   kubectl create configmap client-config --from-file=config/client_config.yaml -n federated-learning
   ```

4. **Deploy Services**:

   ```bash
   kubectl apply -f kubernetes/services/service.yaml
   kubectl apply -f kubernetes/deployments/server.yaml
   kubectl apply -f kubernetes/deployments/client.yaml
   ```

5. **Verify Deployment**:

   ```bash
   kubectl get pods -n federated-learning
   kubectl get services -n federated-learning
   ```
### Accessing the Application

**Option 1: Port Forwarding**

```bash
kubectl port-forward service/fl-server-service 8080:8000 -n federated-learning
```

**Option 2: Load Balancer**

Modify the service to use the LoadBalancer type:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fl-server-service
  namespace: federated-learning
spec:
  type: LoadBalancer  # Changed from ClusterIP
  selector:
    app: fl-server
  ports:
    - port: 8080
      targetPort: 8000
```
**Option 3: Ingress Controller**

Create an ingress resource for external access:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fl-ingress
  namespace: federated-learning
spec:
  rules:
    - host: fl.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fl-server-service
                port:
                  number: 8000
```
### Monitoring and Debugging in Kubernetes

**View Pod Logs**:

```bash
kubectl logs -f deployment/fl-server -n federated-learning
kubectl logs -f deployment/fl-client -n federated-learning
```

**Check Pod Status**:

```bash
kubectl describe pods -n federated-learning
```

**Access Pod Shell**:

```bash
kubectl exec -it <pod-name> -n federated-learning -- /bin/bash
```

**Monitor Resource Usage**:

```bash
kubectl top pods -n federated-learning
```
## Troubleshooting

### Common Issues

**Connection Refused Errors**:
- Check if the server is running: `kubectl get pods -n federated-learning`
- Verify the service exists: `kubectl get services -n federated-learning`
- Check pod logs for startup errors

**Client Registration Failures**:
- Ensure the server is healthy before starting clients
- Check network connectivity between pods
- Verify ConfigMap configurations

**Training Status Issues**:
- Monitor server logs for aggregation errors
- Check client participation in training rounds
- Verify model update sharing
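Startup-ordering problems like the registration failures above can be guarded against with a small polling helper that waits for the server to become healthy before a client registers. This is a generic sketch, not code from the project; the health-check callable you pass in (e.g. an HTTP GET against the server) is an assumption:

```python
import time

def wait_for_server(check, retries=10, delay=2.0):
    """Poll `check` (a zero-arg callable returning True when the server
    is healthy) before proceeding. Connection errors raised by `check`
    are treated as "not ready yet". Returns True once healthy, or
    False after exhausting all retries."""
    for _ in range(retries):
        try:
            if check():
                return True
        except Exception:
            pass  # server not reachable yet; keep waiting
        time.sleep(delay)
    return False

# Hypothetical usage: poll an assumed health endpoint before registering.
# def server_healthy():
#     import requests
#     return requests.get("http://fl-server-service:8000/health", timeout=2).ok
# if wait_for_server(server_healthy):
#     register_client()  # hypothetical registration call
```

In Kubernetes, the equivalent guard is usually a readiness probe on the server plus retry logic like this in the client.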
Debug Commands
# Check all resources in namespace
kubectl get all -n federated-learning
# View detailed pod information
kubectl describe pod <pod-name> -n federated-learning
# Check service endpoints
kubectl get endpoints -n federated-learning
# View ConfigMap contents
kubectl get configmap server-config -n federated-learning -o yaml
## Production Considerations

- **Resource Limits**: Add resource requests and limits to deployments
- **Health Checks**: Implement liveness and readiness probes
- **Secrets Management**: Use Kubernetes Secrets for sensitive data
- **Persistent Storage**: Add persistent volumes for model storage
- **Monitoring**: Integrate with Prometheus/Grafana for metrics
- **Logging**: Use centralized logging (ELK stack, Fluentd)
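The first two items can be sketched as additions to the server container spec. The resource values are placeholders to tune per workload, and the `/health` endpoint is an assumption about the server's API:

```yaml
# Sketch: add under the fl-server container in server.yaml
resources:
  requests:
    cpu: "250m"        # placeholder values; tune per workload
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "1Gi"
livenessProbe:
  httpGet:
    path: /health      # assumed health endpoint
    port: 8000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
```

A failing liveness probe restarts the pod; a failing readiness probe only removes it from the Service endpoints, which also keeps clients from registering against a server that is not yet ready.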
## Scaling

### Horizontal Pod Autoscaling

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fl-client-hpa
  namespace: federated-learning
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fl-client
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
This guide covers both local development (Docker Compose) and production deployment (Kubernetes), along with the debugging features needed to monitor and troubleshoot each environment.