AI Agent Deployment Workflow

This guide walks you through the step-by-step process of deploying an MCP Server or AI Agent on NexaStack.
Follow each section carefully to ensure smooth setup — from configuring the Git repository or uploading a zip file, to defining resource requirements and monitoring deployment status.


Objective

By the end of this guide, you will learn how to:

  • Add and configure an AI Agent in NexaStack.
  • Connect with a Git repository or upload a packaged zip file.
  • Set up resource requirements and deployment preferences.
  • Verify successful deployment status.

Step 1: Log In to the Platform

  1. Open the NexaStack login page.
  2. Enter your workspace credentials — email and password.
  3. Click Login to access the platform.
Security Reminder

Always log in via a secure HTTPS connection and never share your credentials publicly.


Step 2: Navigate to the MCP Servers Section

  1. From the left sidebar, go to MarketPlace.
  2. Click on the AI Agents tab — this lists all available MCP Servers and AI Agents.
  3. Ensure the Add New AI Agent button is visible at the top-right.

AI Agents Menu


Step 3: Add a New AI Agent

When you click Add New AI Agent, a configuration form appears.
This form collects essential information about the AI Agent and its repository.

Fill in the Form Details

  Field               Description
  Agent Name          Unique identifier for your AI Agent
  Git Repository URL  Repository where the agent code resides
  GitLab Username     Username used for repository access
  Access Token        Personal access token for authentication
  Branch              The branch to deploy from (auto-detected)

Repository Requirements
  • The repository must follow a mono-architecture structure (all AI components in one repo).
  • Once you add your Repo URL, Username, and Token, available branches will auto-populate.
  • NexaStack automatically detects the AI Agent structure from the repo to validate deployment readiness.
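Before pushing, you can sanity-check the repository layout locally. The sketch below is illustrative only: the file names in `REQUIRED_FILES` are assumptions for the example, not NexaStack's actual validation rules, which are applied automatically when you connect the repo.

```python
import os

# Hypothetical required files for a deployable agent repo; adjust to
# whatever your agent's structure actually needs.
REQUIRED_FILES = ["main.py", "requirements.txt", "agent.yaml"]

def missing_repo_files(repo_path: str) -> list[str]:
    """Return the required files that are absent from the repo root."""
    return [name for name in REQUIRED_FILES
            if not os.path.isfile(os.path.join(repo_path, name))]
```

An empty result means the illustrative checklist is satisfied; anything returned is worth fixing before NexaStack's own readiness validation runs.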

Add AI Agent Form


Step 4: Upload AI Agent Zip File (Alternative to Git)

If you are not using a Git repository, you can deploy the agent by uploading a packaged zip file.

  1. Click Upload Zip.
  2. Select your AI Agent zip file (e.g., backlogs_planning.zip) from your local system.
  3. Click Upload File to start uploading.
  4. Wait until you see the confirmation message:
    “Agent Zip file uploaded”
  5. Once uploaded, click Proceed Next to continue.

Upload MCP Zip

File Validation

Ensure your zip file includes all essential dependencies, configuration files, and a valid manifest.
Incomplete packages can result in deployment failure.
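A quick local check can catch an incomplete package before you upload it. This is a minimal sketch: the entries in `REQUIRED_ENTRIES` (including the manifest file name) are assumptions for the example, not the platform's published package spec.

```python
import zipfile

# Hypothetical entries a valid package should contain; substitute the
# files your deployment actually requires.
REQUIRED_ENTRIES = {"manifest.json", "requirements.txt"}

def missing_from_zip(zip_path: str) -> set[str]:
    """Return required entries absent from the zip, without extracting it."""
    with zipfile.ZipFile(zip_path) as zf:
        names = set(zf.namelist())
    return REQUIRED_ENTRIES - names
```

If the returned set is non-empty, repackage the zip before clicking Upload File.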


Step 5: Select the MCP Server

After the upload or repository setup, you’ll need to link your AI Agent with an MCP Server.

Missing MCP Server?

If you don’t have an MCP Server available, follow the step-by-step guide to deploy one:
👉 Deploy MCP Server

  1. From the MCP Server List, select your preferred server (e.g., MCP Odoo).
  2. Click Next to proceed to environment configuration.

Select MCP Server

Info

Each AI Agent must be mapped to an MCP Server that manages its lifecycle, resource orchestration, and monitoring.

Step 6: Configure Environment Variables

You can now define environment variables required for your AI Agent deployment.
These variables allow the agent to connect to databases, APIs, and internal services securely.

  Key           Value
  DATABASE_URL  postgres://username:password@host:port/database
  REDIS_HOST    127.0.0.1
  REDIS_PORT    6379
  API_KEY       your_api_key_here

Adding Variables

  1. Click Add Variable to create a new key-value pair.
  2. Continue adding variables until all required configurations are completed.
  3. Review each variable for accuracy before proceeding.
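Inside the agent, these variables are typically read at startup. The sketch below mirrors the keys from the table above and treats `DATABASE_URL` as required; the defaults and the decision to fail fast are example choices, not platform requirements.

```python
import os

def load_config() -> dict:
    """Read the Step 6 environment variables; fail fast if a required one is missing."""
    try:
        database_url = os.environ["DATABASE_URL"]  # required, no sensible default
    except KeyError:
        raise RuntimeError("DATABASE_URL must be set for the agent to start")
    return {
        "database_url": database_url,
        "redis_host": os.environ.get("REDIS_HOST", "127.0.0.1"),
        "redis_port": int(os.environ.get("REDIS_PORT", "6379")),
        "api_key": os.environ.get("API_KEY", ""),
    }
```

Failing at startup on a missing required variable surfaces misconfiguration in the pipeline logs immediately, rather than mid-request in production.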

Environment Variables Form


Step 7: Define Resource Requirements

Allocate the necessary system resources for your AI Agent to ensure smooth operation.

  1. The Resource Requirements page will appear.
  2. Enter CPU and Memory Requests (minimum resources needed for the agent to start):

     Resource  Example Value  Unit
     CPU       10             millicores (m) or cores
     Memory    128            Mi or Gi

  3. Enter CPU and Memory Limits (maximum resources allowed to handle spikes):

     Resource  Example Value  Unit
     CPU       150            millicores (m) or cores
     Memory    512            Mi or Gi

  4. Verify all input fields are filled correctly.

Resource Requirements

Resource Planning
  • Requests: Minimum resources needed for the agent to start reliably.
  • Limits: Slightly higher values to handle temporary load spikes without throttling.
  • Plan based on expected workload to optimize performance and cost.
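The requests-vs-limits relationship above can be sanity-checked mechanically. This is a minimal sketch using the Kubernetes-style units shown in the tables (`m` for millicores, `Mi`/`Gi` for memory); it is not part of NexaStack's form validation.

```python
def cpu_to_millicores(value: str) -> int:
    """Convert '150m' or '0.5' (cores) to millicores."""
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)

def mem_to_mib(value: str) -> int:
    """Convert '512Mi' or '1Gi' to MiB."""
    if value.endswith("Gi"):
        return int(value[:-2]) * 1024
    if value.endswith("Mi"):
        return int(value[:-2])
    raise ValueError(f"unsupported memory unit: {value}")

def limits_cover_requests(req_cpu: str, lim_cpu: str,
                          req_mem: str, lim_mem: str) -> bool:
    """True when each limit is at least as large as its request."""
    return (cpu_to_millicores(lim_cpu) >= cpu_to_millicores(req_cpu)
            and mem_to_mib(lim_mem) >= mem_to_mib(req_mem))
```

With the example values above, `limits_cover_requests("10m", "150m", "128Mi", "512Mi")` holds, so the spike headroom described in Resource Planning is in place.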

Optional Deployment Features

  • Orchestrator Deployment
    If you want to deploy an orchestrator alongside the agent, enable this toggle and provide a name.

    Naming Convention

    Use lowercase letters with no spaces (e.g., agentorchestrator).

  • Observability / Tracing
    Enable this toggle if you want to monitor traces and metrics for the agent during runtime.

Resource Requirements with Toggles


Step 8: Select Cluster

  1. From the dropdown, select your preferred Cluster where the AI Agent will run.
  2. This determines how and where the agent gets deployed and managed.

Cluster Selection

Missing Cluster?

If you don’t have an available cluster, onboard one based on your workspace preferences.
👉 Onboard a Cluster

Click Deploy Agent to start deployment.

Step 9: Monitor Pipeline Status

After clicking Deploy Agent, NexaStack will initiate the deployment pipeline.

You can monitor real-time status updates:

  Status   Description
  Pending  Deployment has been triggered and is waiting for resources.
  Running  The agent is currently being deployed.
  Success  Deployment completed successfully.
  Failed   Deployment failed — check pipeline logs for errors.

Pipeline Success

Monitoring Tip

Use the Pipeline Logs view to track detailed progress.
If a deployment fails, logs help identify issues like misconfiguration or missing dependencies.
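If you script around deployments, the status lifecycle above maps to a simple polling loop. The sketch below is an assumption-laden illustration: `get_status` stands in for however you actually query pipeline state (NexaStack does not necessarily expose a public status API), which also keeps the loop itself testable.

```python
import time

# Terminal states from the status table above.
TERMINAL_STATES = {"Success", "Failed"}

def wait_for_pipeline(get_status, poll_seconds: float = 5.0,
                      max_polls: int = 120) -> str:
    """Poll get_status() until the pipeline reaches Success or Failed."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("pipeline did not reach a terminal state")
```

On `Failed`, fetch the Pipeline Logs view as described above before retrying.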


Completion Message

Deployment Successful!

You have successfully deployed an AI Agent in NexaStack.
Your deployment is now active and ready for production or further integration.

Deployment Success


Post-Deployment Steps

Once the pipeline completes successfully:

  1. Access your deployed AI Agent from the Dashboard.
  2. Copy the Deployed URL or endpoint provided.
  3. Test agent responses or integrations.
  4. Monitor CPU, memory, and inference metrics from the Observability Dashboard.

Best Practices

  • Validate your package or repository before deploying.
  • Allocate appropriate resources to prevent downtime.
  • Use secure tokens and avoid hardcoded credentials.
  • Monitor deployments regularly to ensure uptime and performance.
  • Test inference endpoints before moving to production.