# Usage Guide

### Getting Started

Dotbase is a low-code, agent-as-a-service UI framework for creating and managing sophisticated multi-agent systems. This guide explains how to use the system effectively while respecting its architectural constraints.

### Node Types and Configuration

#### SPARK Node

* Primary data processing node
* Drag from Library Panel to workspace
* Configuration options in Properties Panel:

```json
{
    "analysis_types": ["medical", "technical", "research"],
    "max_retries": 3,
    "timeout_seconds": 30
}
```

#### LUMINA Node

* Receives data from SPARK
* Processes and transforms data
* Configuration settings:

```json
{
    "processing_mode": "async",
    "caching_enabled": true,
    "transformation_rules": [
        "data_normalization",
        "format_conversion"
    ]
}
```

#### BRIDGE Node

* Entry point for system queries
* UI Configuration:

```json
{
    "input_validation": true,
    "query_formatting": true,
    "routing_rules": {
        "medical": "priority_high",
        "technical": "priority_medium",
        "research": "priority_low"
    }
}
```

#### NEXUS Node

* Independent expert system coordinator
* System message handling
* Configuration panel options:

```json
{
    "coordination_mode": "autonomous",
    "decision_threshold": 0.75,
    "expertise_areas": [
        "technical_analysis",
        "risk_assessment",
        "resource_optimization"
    ]
}
```

#### HUB Node

* Central coordination point
* Receives from BRIDGE, LUMINA, and NEXUS
* Configuration settings:

```json
{
    "max_connections": 100,
    "queue_size": 1000,
    "processing_threads": 4
}
```
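The HUB settings above map naturally onto a bounded work queue drained by a fixed pool of worker threads. A minimal sketch in Python (the `run_hub` and `handle` names are illustrative, not part of Dotbase's API; the constants mirror `queue_size` and `processing_threads` from the configuration):

```python
import queue
import threading

QUEUE_SIZE = 1000        # mirrors "queue_size"
PROCESSING_THREADS = 4   # mirrors "processing_threads"

def run_hub(messages, handle):
    """Drain `messages` through a bounded queue with a fixed worker pool."""
    work = queue.Queue(maxsize=QUEUE_SIZE)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = work.get()
            if item is None:          # sentinel: shut this worker down
                work.task_done()
                return
            out = handle(item)
            with lock:                # results list is shared across workers
                results.append(out)
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(PROCESSING_THREADS)]
    for t in threads:
        t.start()
    for message in messages:
        work.put(message)             # blocks when the queue is full (backpressure)
    for _ in threads:
        work.put(None)                # one sentinel per worker
    work.join()                       # wait until every item is processed
    return results
```

The bounded queue gives natural backpressure: producers block once `queue_size` items are pending, rather than growing memory without limit.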

### Creating Workflows

#### 1. Basic Workflow Setup

1. Drag required nodes from Library Panel
2. Connect nodes following rules:
   * SPARK → LUMINA
   * LUMINA → HUB
   * BRIDGE → HUB
   * NEXUS → HUB

#### 2. Node Connection Rules

```mermaid
graph LR
    S[SPARK] -->|Only| L[LUMINA]
    L -->|After SPARK| H[HUB]
    B[BRIDGE] -->|Only| H
    N[NEXUS] -->|Only| H
```
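The rules in the diagram can also be checked programmatically before a workflow is run. A small sketch (`ALLOWED_EDGES` and `validate_workflow` are illustrative names, not Dotbase API):

```python
# Allowed connections from the rules above: source node -> permitted targets.
ALLOWED_EDGES = {
    "SPARK": {"LUMINA"},
    "LUMINA": {"HUB"},
    "BRIDGE": {"HUB"},
    "NEXUS": {"HUB"},
    "HUB": set(),          # HUB is a sink; it has no outgoing connections
}

def validate_workflow(edges):
    """Return a list of human-readable errors for edges that break the rules."""
    errors = []
    for source, target in edges:
        allowed = ALLOWED_EDGES.get(source)
        if allowed is None:
            errors.append(f"Unknown node type: {source}")
        elif target not in allowed:
            errors.append(f"Invalid connection: {source} -> {target}")
    return errors
```

For example, `validate_workflow([("SPARK", "HUB")])` flags the edge, because SPARK may only feed LUMINA.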

### Configuring Inputs

#### SPARK Node Inputs

Using the Properties Panel:

1. **Medical Analysis**

```json
{
    "query_type": "medical",
    "data": {
        "symptoms": {
            "fever": 8,
            "fatigue": 6
        },
        "patient_info": {
            "age": 45,
            "history": ["diabetes"]
        }
    }
}
```

2. **Technical Analysis**

```json
{
    "query_type": "technical",
    "data": {
        "requirements": [
            "Cloud infrastructure",
            "Real-time processing"
        ],
        "constraints": [
            "Budget limitation",
            "Time constraint"
        ]
    }
}
```

3. **Research Analysis**

```json
{
    "query_type": "research",
    "data": {
        "citation_count": 45,
        "journal_impact": 2.5,
        "publication_year": 2023
    }
}
```
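Before submitting any of the payloads above, it is worth checking that the `data` object carries the fields its query type expects. A minimal validation sketch derived from the three examples (`REQUIRED_FIELDS` and `validate_spark_input` are illustrative, not part of Dotbase):

```python
# Required top-level "data" keys per query type, taken from the examples above.
REQUIRED_FIELDS = {
    "medical": {"symptoms", "patient_info"},
    "technical": {"requirements", "constraints"},
    "research": {"citation_count", "journal_impact", "publication_year"},
}

def validate_spark_input(payload):
    """Return a list of problems with a SPARK input payload (empty if valid)."""
    query_type = payload.get("query_type")
    if query_type not in REQUIRED_FIELDS:
        return [f"Unsupported query_type: {query_type!r}"]
    missing = REQUIRED_FIELDS[query_type] - set(payload.get("data", {}))
    return [f"Missing field: {name}" for name in sorted(missing)]
```

Running this check in the UI layer keeps malformed payloads from ever reaching the SPARK node.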

#### BRIDGE Node Inputs

In the node configuration panel:

1. Set query parameters:

```json
{
    "query_description": "Analyze telemedicine platform implementation",
    "priority": "high",
    "required_analysis": [
        "technical_feasibility",
        "compliance_check"
    ]
}
```

### Using the Testing Panel

#### 1. Workflow Testing

1. Click "New File" in the top menu
2. Drag the required input agents into the workspace
3. Enter test data
4. Click "Run Test" to execute

#### 2. Monitoring Results

* View real-time node status in Monitor Panel
* Check data flow visualization
* Access detailed logs in Log Panel

#### Query Types and Usage

1. **Medical Analysis**

```python
data = {
    "symptoms": {
        "fever": 8,
        "fatigue": 6
    }
}
results = expert_analysis("medical", data)
```

2. **Technical Analysis**

```python
data = {
    "requirements": ["API", "Database", "UI"],
    "constraints": ["Time", "Budget"]
}
results = expert_analysis("technical", data)
```

3. **Research Analysis**

```python
data = {
    "citation_count": 45,
    "journal_impact": 2.5,
    "publication_year": 2023
}
results = expert_analysis("research", data)
```

### Best Practices

#### Data Flow Management

1. **SPARK to LUMINA**
   * Ensure data is properly formatted
   * Include all required fields
   * Validate before transmission
2. **LUMINA to HUB**
   * Transform data as needed
   * Add metadata if required
   * Verify data integrity
3. **BRIDGE to HUB**
   * Format initial queries properly
   * Include necessary context
   * Set appropriate priority levels
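The "add metadata" and "verify data integrity" steps for LUMINA-to-HUB traffic can be made concrete with a checksummed envelope. A sketch under the assumption that payloads are JSON-serializable (the envelope shape and function names are illustrative, not Dotbase's wire format):

```python
import hashlib
import json
import time

def to_hub_envelope(payload, source_node):
    """Wrap a transformed payload with metadata and an integrity checksum."""
    body = json.dumps(payload, sort_keys=True)   # canonical form for hashing
    return {
        "source": source_node,
        "sent_at": time.time(),
        "checksum": hashlib.sha256(body.encode()).hexdigest(),
        "payload": payload,
    }

def verify_envelope(envelope):
    """Recompute the checksum on receipt; False means the payload was altered."""
    body = json.dumps(envelope["payload"], sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest() == envelope["checksum"]
```

Sorting keys before hashing matters: two semantically identical payloads must serialize identically, or the receiver will report false corruption.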

#### Error Handling

```python
try:
    results = expert_analysis(query_type, data)
except Exception as exc:
    # Surface a structured error instead of letting the exception propagate
    results = {
        "error": str(exc),
        "status": "failed",
        "query_type": query_type,
    }
```

### Common Use Cases

1. **Medical Data Processing**

```python
medical_data = {
    "symptoms": {
        "pain": 7,
        "nausea": 4,
        "dizziness": 3
    }
}
analysis = expert_analysis("medical", medical_data)
```

2. **Technical Feasibility**

```python
technical_data = {
    "requirements": [
        "Cloud hosting",
        "Real-time processing",
        "Data encryption"
    ],
    "constraints": [
        "Budget limitations",
        "Time constraints"
    ]
}
assessment = expert_analysis("technical", technical_data)
```

### Troubleshooting

#### Common Issues and Solutions

1. **Connection Errors**
   * Verify node connection rules
   * Check data format compatibility
   * Ensure proper authentication
2. **Processing Errors**
   * Validate input data structure
   * Check for missing required fields
   * Verify query type is supported
3. **System Response Issues**
   * Monitor system resources
   * Check for rate limiting
   * Verify API key validity
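When rate limiting is the suspected cause of response issues, retrying with exponential backoff is usually the right first fix. A generic sketch (`call_with_backoff` is an illustrative helper, not part of Dotbase):

```python
import time

def call_with_backoff(func, *args, max_retries=3, base_delay=1.0, **kwargs):
    """Retry a flaky call, doubling the delay between attempts."""
    for attempt in range(max_retries):
        try:
            return func(*args, **kwargs)
        except Exception:
            if attempt == max_retries - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

For example, wrapping a query as `call_with_backoff(expert_analysis, "medical", data)` retries up to three times before giving up.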
