Usage Guide

Getting Started

Dotbase is a low-code, agent-as-a-service UI framework for creating and managing sophisticated multi-agent systems. This guide explains how to use the system effectively while respecting its architectural constraints.

Working with Nodes

Node Types and Configuration

SPARK Node

  • Primary data processing node

  • Drag from Library Panel to workspace

  • Configuration options in Properties Panel:

{
    "analysis_types": ["medical", "technical", "research"],
    "max_retries": 3,
    "timeout_seconds": 30
}

LUMINA Node

  • Receives data from SPARK

  • Processes and transforms data

  • Configuration settings:

{
    "processing_mode": "async",
    "caching_enabled": true,
    "transformation_rules": [
        "data_normalization",
        "format_conversion"
    ]
}
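
The transformation_rules list is applied in order. Below is a minimal sketch of such a rule pipeline, assuming each rule name maps to a plain Python function; the two rule bodies are hypothetical stand-ins, not Dotbase's actual transformations.

# Hypothetical registry: each named rule is a function of the payload.
TRANSFORMATION_RULES = {
    "data_normalization": lambda data: {k.lower(): v for k, v in data.items()},
    "format_conversion": lambda data: {k: str(v) for k, v in data.items()},
}

def apply_transformations(data, rule_names):
    # Apply each configured rule in the order it appears in the config.
    for name in rule_names:
        data = TRANSFORMATION_RULES[name](data)
    return data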

BRIDGE Node

  • Entry point for system queries

  • UI Configuration:

{
    "input_validation": true,
    "query_formatting": true,
    "routing_rules": {
        "medical": "priority_high",
        "technical": "priority_medium",
        "research": "priority_low"
    }
}
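
A minimal sketch of how the routing_rules mapping might assign a priority to an incoming query; the lookup and the fallback default are assumptions, not documented behavior.

# Mirrors the routing_rules mapping from the configuration above.
ROUTING_RULES = {
    "medical": "priority_high",
    "technical": "priority_medium",
    "research": "priority_low",
}

def route_query(query_type):
    # Assumed default: unrecognized query types fall back to low priority.
    return ROUTING_RULES.get(query_type, "priority_low")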

NEXUS Node

  • Independent expert system coordinator

  • System message handling

  • Configuration panel options:

{
    "coordination_mode": "autonomous",
    "decision_threshold": 0.75,
    "expertise_areas": [
        "technical_analysis",
        "risk_assessment",
        "resource_optimization"
    ]
}
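
The decision_threshold setting is not documented in detail. One plausible reading, sketched here purely as an assumption: in autonomous mode, NEXUS acts on a recommendation only when its confidence score clears the threshold, and defers otherwise.

DECISION_THRESHOLD = 0.75  # from the configuration above

def decide(recommendation, confidence):
    # Assumed behavior: act autonomously only above the threshold,
    # otherwise defer the recommendation for review.
    if confidence >= DECISION_THRESHOLD:
        return {"action": recommendation, "mode": "autonomous"}
    return {"action": "defer", "mode": "review", "candidate": recommendation}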

HUB Node

  • Central coordination point

  • Receives from BRIDGE, LUMINA, and NEXUS

  • Configuration settings:

{
    "max_connections": 100,
    "queue_size": 1000,
    "processing_threads": 4
}
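
A rough sketch of what queue_size and processing_threads could correspond to internally, using a bounded queue and a pool of worker threads. HUB's internals are not documented; this only illustrates the two settings.

import queue
import threading

# Bounded queue mirrors "queue_size"; worker count mirrors "processing_threads".
message_queue = queue.Queue(maxsize=1000)

def worker():
    while True:
        message = message_queue.get()
        print("processing", message)  # placeholder for HUB coordination logic
        message_queue.task_done()

for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

message_queue.put({"source": "BRIDGE", "query_type": "medical"})
message_queue.join()  # wait until queued messages are handled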

Creating Workflows

1. Basic Workflow Setup

  1. Drag required nodes from Library Panel

  2. Connect nodes following rules:

    • SPARK → LUMINA

    • LUMINA → HUB

    • BRIDGE → HUB

    • NEXUS → HUB

2. Node Connection Rules

graph LR
    S[SPARK] -->|Only| L[LUMINA]
    L -->|After SPARK| H[HUB]
    B[BRIDGE] -->|Only| H
    N[NEXUS] -->|Only| H
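
The same rules, expressed as a small helper for sanity-checking a workflow's connections (a sketch; the node names mirror the diagram above):

# Allowed source -> target connections, matching the diagram.
ALLOWED_CONNECTIONS = {
    "SPARK": {"LUMINA"},
    "LUMINA": {"HUB"},
    "BRIDGE": {"HUB"},
    "NEXUS": {"HUB"},
}

def validate_workflow(edges):
    # edges is a list of (source, target) node-type pairs.
    errors = []
    for source, target in edges:
        if target not in ALLOWED_CONNECTIONS.get(source, set()):
            errors.append(f"{source} -> {target} is not a valid connection")
    return errors

# Example: SPARK -> HUB violates the rules.
print(validate_workflow([("SPARK", "LUMINA"), ("SPARK", "HUB")]))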

Configuring Inputs

SPARK Node Inputs

Using the Properties Panel:

  1. Medical Analysis

{
    "query_type": "medical",
    "data": {
        "symptoms": {
            "fever": 8,
            "fatigue": 6
        },
        "patient_info": {
            "age": 45,
            "history": ["diabetes"]
        }
    }
}
  2. Technical Analysis

{
    "query_type": "technical",
    "data": {
        "requirements": [
            "Cloud infrastructure",
            "Real-time processing"
        ],
        "constraints": [
            "Budget limitation",
            "Time constraint"
        ]
    }
}
  3. Research Analysis

{
    "query_type": "research",
    "data": {
        "citation_count": 45,
        "journal_impact": 2.5,
        "publication_year": 2023
    }
}
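
A minimal sketch that checks a payload like the ones above before it reaches the SPARK node, assuming payloads are plain dicts and that the analysis_types list from the SPARK configuration is available:

ANALYSIS_TYPES = ["medical", "technical", "research"]  # from the SPARK config

def validate_spark_payload(payload):
    # Every payload needs a supported query_type and a data section.
    if payload.get("query_type") not in ANALYSIS_TYPES:
        raise ValueError(f"unsupported query_type: {payload.get('query_type')}")
    if not isinstance(payload.get("data"), dict):
        raise ValueError("payload must include a 'data' object")
    return payload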

BRIDGE Node Inputs

In the node configuration panel:

  1. Set query parameters:

{
    "query_description": "Analyze telemedicine platform implementation",
    "priority": "high",
    "required_analysis": [
        "technical_feasibility",
        "compliance_check"
    ]
}
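
A sketch of a pre-flight check for a BRIDGE query like the one above; the required fields are taken from the example, but the helper itself is hypothetical.

REQUIRED_FIELDS = ["query_description", "priority", "required_analysis"]

def check_bridge_query(query):
    # Reject queries missing any of the fields shown in the example above.
    missing = [field for field in REQUIRED_FIELDS if field not in query]
    if missing:
        raise ValueError(f"BRIDGE query missing fields: {missing}")
    return query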

Using the Testing Panel

1. Workflow Testing

  1. Click "New File" in the top menu

  2. Select input agents and drag them into the workspace

  3. Enter test data

  4. Click "Run Test" to execute

2. Monitoring Results

  • View real-time node status in Monitor Panel

  • Check data flow visualization

  • Access detailed logs in Log Panel

Query Types and Usage

Each query type is submitted through the same expert_analysis(query_type, data) helper:

  1. Medical Analysis

data = {
    "symptoms": {
        "fever": 8,
        "fatigue": 6
    }
}
results = expert_analysis("medical", data)
  2. Technical Analysis

data = {
    "requirements": ["API", "Database", "UI"],
    "constraints": ["Time", "Budget"]
}
results = expert_analysis("technical", data)
  3. Research Analysis

data = {
    "citation_count": 45,
    "journal_impact": 2.5,
    "publication_year": 2023
}
results = expert_analysis("research", data)
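
The calls above all follow the same dispatch pattern. A minimal sketch of how such a helper could route on query_type; the handler functions are hypothetical placeholders, not Dotbase internals.

def analyze_medical(data):
    return {"query_type": "medical", "status": "ok", "input": data}

def analyze_technical(data):
    return {"query_type": "technical", "status": "ok", "input": data}

def analyze_research(data):
    return {"query_type": "research", "status": "ok", "input": data}

HANDLERS = {
    "medical": analyze_medical,
    "technical": analyze_technical,
    "research": analyze_research,
}

def expert_analysis(query_type, data):
    # Dispatch to the handler registered for this query type.
    if query_type not in HANDLERS:
        raise ValueError(f"unsupported query_type: {query_type}")
    return HANDLERS[query_type](data)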

Best Practices

Data Flow Management

  1. SPARK to LUMINA

    • Ensure data is properly formatted

    • Include all required fields

    • Validate before transmission (see the sketch after this list)

  2. LUMINA to HUB

    • Transform data as needed

    • Add metadata if required

    • Verify data integrity

  3. BRIDGE to HUB

    • Format initial queries properly

    • Include necessary context

    • Set appropriate priority levels
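
A combined sketch of the checks above for the SPARK to LUMINA hop, assuming payloads are plain dicts; the field names come from the earlier input examples, and the metadata convention is an assumption.

def prepare_for_lumina(payload):
    # Ensure data is properly formatted and required fields are present.
    if "query_type" not in payload or "data" not in payload:
        raise ValueError("payload must include 'query_type' and 'data'")
    # Attach simple metadata before handing off (an assumed convention,
    # not a documented Dotbase requirement).
    payload.setdefault("metadata", {})["validated"] = True
    return payload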

Error Handling

Wrap calls to expert_analysis so that failures surface as structured results rather than crashing the workflow:

try:
    results = expert_analysis(query_type, data)
except Exception as e:
    # Build a structured error payload so downstream consumers
    # can distinguish failures from normal results.
    error_response = {
        "error": str(e),
        "status": "failed",
        "query_type": query_type
    }
    results = error_response  # propagate the failure as a result object

Common Use Cases

  1. Medical Data Processing

medical_data = {
    "symptoms": {
        "pain": 7,
        "nausea": 4,
        "dizziness": 3
    }
}
analysis = expert_analysis("medical", medical_data)
  2. Technical Feasibility

technical_data = {
    "requirements": [
        "Cloud hosting",
        "Real-time processing",
        "Data encryption"
    ],
    "constraints": [
        "Budget limitations",
        "Time constraints"
    ]
}
assessment = expert_analysis("technical", technical_data)

Troubleshooting

Common Issues and Solutions

  1. Connection Errors

    • Verify node connection rules

    • Check data format compatibility

    • Ensure proper authentication

  2. Processing Errors

    • Validate input data structure

    • Check for missing required fields

    • Verify query type is supported

  3. System Response Issues

    • Monitor system resources

    • Check for rate limiting (a retry sketch follows this list)

    • Verify API key validity
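
If you suspect rate limiting or transient failures, a simple client-side retry with backoff around expert_analysis can help confirm it. This is a sketch: it catches a broad Exception for illustration, and max_retries mirrors the SPARK node's setting.

import time

def expert_analysis_with_retry(query_type, data, max_retries=3):
    for attempt in range(max_retries):
        try:
            return expert_analysis(query_type, data)
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the original error
            # Exponential backoff between attempts: 1s, 2s, 4s, ...
            time.sleep(2 ** attempt)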
