🚀 ALSM Server

Abstract Language Semantic Model - API Documentation & Tools

🖥️ Interactive Tools

ALSM Convert
/alsm-convert

Convert JS/TS to PlantUML diagrams. Supports multiple files and code extraction from HTML/Vue files.

AI Documentation
/ai-doc

Generate AI documentation with PlantUML diagrams. Multiple AI providers supported.

PlantUML Repair
/repair-plantuml

Auto-detect and repair PlantUML syntax errors in markdown using AI.

🔌 API Endpoints

GET /api/ai-config UPDATED

Get configuration for all supported AI services and models.

?showAll=true  - Shows all services and models regardless of publish status
                 Use for first-party clients (alsm-server-bun HTML tools)
                 Omit for third-party clients (gituml) to respect publish fields
Example response:

{
  "version": "1.0.0",
  "updated_at": "2025-11-16T12:00:00.000Z",
  "services": [ ... array of service objects ... ],
  "default_service": "gituml-ai",
  "recommended_services": ["gituml-ai", "openai", "anthropic", "gemini"]
}
Example service object (one element of the services array):

{
  "id": "openai",
  "display_name": "OpenAI",
  "description": "OpenAI's GPT models",
  "publish": true,
  "url": "https://api.openai.com/v1/chat/completions",
  "doco_url": "https://platform.openai.com/docs/models",
  "format": "openai",
  "requiresAuth": true,
  "authHeaderName": "Authorization",
  "default_model": "gpt-4o-mini",
  "available_models": [
    {
      "id": "gpt-4o-mini",
      "name": "GPT-4o Mini",
      "description": "Affordable and intelligent small model",
      "context_window": 128000,
      "supports_streaming": true,
      "publish": true,
      "cost_per_million_input_tokens": 0.15,
      "cost_per_million_output_tokens": 0.60
    },
    {
      "id": "gpt-4o",
      "name": "GPT-4o",
      "description": "High-intelligence flagship model",
      "context_window": 128000,
      "supports_streaming": true,
      "publish": true,
      "cost_per_million_input_tokens": 2.50,
      "cost_per_million_output_tokens": 10.00
    }
  ],
  "supports_streaming": true,
  "supports_function_calling": true,
  "is_local": false,
  "status": "active",
  "api_key_env_var": "OPENAI_API_KEY",
  "priority": 2
}
Service-level fields:
• id                                 - Unique service identifier
• display_name                       - Human-readable service name
• publish                            - Available to third-party clients (defaults to true)
• url                                - API endpoint URL
• format                             - API format: "openai", "ollama", "gemini", "anthropic"
• requiresAuth                       - Whether the service requires an API key
• default_model                      - Default model ID for this service
• available_models                   - Array of model objects (see below)
• status                             - "active", "beta", or "deprecated"
• priority                           - Lower number = higher priority in UI (optional)

Model-level fields (see the TypeScript sketch after this list):
• id                                 - Unique model identifier
• name                               - Human-readable model name
• description                        - Model description and capabilities
• context_window                     - Maximum context length in tokens
• supports_streaming                 - Whether model supports streaming responses
• publish                            - Available to third-party clients (defaults to true)
• cost_per_million_input_tokens      - Cost per 1M input tokens (USD)
• cost_per_million_output_tokens     - Cost per 1M output tokens (USD)
• provider                           - For gituml-ai: which provider to route to
• provider_model_id                  - For gituml-ai: actual model ID at provider
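
For client authors, the documented fields map roughly onto these TypeScript shapes. This is a sketch: the type names AiModelConfig and AiServiceConfig are illustrative, and optionality is inferred from the notes above rather than confirmed by the server.

interface AiModelConfig {
  id: string;                              // unique model identifier
  name: string;                            // human-readable model name
  description?: string;                    // model description and capabilities
  context_window?: number;                 // maximum context length in tokens
  supports_streaming?: boolean;            // whether the model supports streaming
  publish?: boolean;                       // available to third-party clients (defaults to true)
  cost_per_million_input_tokens?: number;  // USD per 1M input tokens
  cost_per_million_output_tokens?: number; // USD per 1M output tokens
  provider?: string;                       // gituml-ai only: provider to route to
  provider_model_id?: string;              // gituml-ai only: actual model ID at the provider
}

interface AiServiceConfig {
  id: string;                              // unique service identifier
  display_name: string;                    // human-readable service name
  publish?: boolean;                       // available to third-party clients (defaults to true)
  url: string;                             // API endpoint URL
  format: "openai" | "ollama" | "gemini" | "anthropic";
  requiresAuth: boolean;                   // whether the service requires an API key
  default_model: string;                   // default model ID for this service
  available_models: AiModelConfig[];
  status: "active" | "beta" | "deprecated";
  priority?: number;                       // lower number = higher priority in UI
}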
Example requests:

# Third-party clients (only published services/models)
curl http://localhost:3000/api/ai-config

# First-party clients (all services/models, including unpublished)
curl http://localhost:3000/api/ai-config?showAll=true

# Pretty-print with jq
curl http://localhost:3000/api/ai-config | jq '.'

# List all service IDs
curl http://localhost:3000/api/ai-config | jq '.services[].id'

# Get models for a specific service
curl http://localhost:3000/api/ai-config | jq '.services[] | select(.id=="openai") | .available_models'
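
A minimal TypeScript client sketch that fetches the config and resolves a usable default service and model (the base URL and function name are assumptions):

// Sketch: resolve a default service + model from /api/ai-config.
async function pickDefaultModel() {
  const res = await fetch("http://localhost:3000/api/ai-config");
  const config = await res.json();

  // Prefer the advertised default_service, falling back to the first service.
  const service =
    config.services.find((s: any) => s.id === config.default_service) ??
    config.services[0];

  // Prefer the service's default_model, falling back to its first model.
  const model =
    service.available_models.find((m: any) => m.id === service.default_model) ??
    service.available_models[0];

  return { service: service.id, model: model.id };
}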
POST /api/convert RENAMED

Convert source code to ALSM and PlantUML diagrams.

Request body:

{
  "sources": [
    { "filename": "example.ts", "content": "const x = 1;" }
  ],
  "showFiles": true,
  "skipUnnamed": false,
  "hideEmptyFiles": false,
  "showLineLabels": false
}
Response:

{
  "debug": "ALSM representation...",
  "plantuml": "@startuml\n...\n@enduml"
}
Example:

curl -X POST http://localhost:3000/api/convert \
  -H "Content-Type: application/json" \
  -d '{"sources": [{"filename": "test.ts", "content": "class Car {}"}]}'
POST /api/ai-doc

Generate AI documentation from source code.

Generates AI-powered documentation for provided source code using the specified AI service and model. Supports optional PlantUML diagram generation and repair.

Notes on PlantUML handling:

  • API endpoints return raw markdown (PlantUML blocks as text)
  • The PlantUML server is used only for validation/repair, not for rendering in API responses
  • Clients (like GitUML) are responsible for converting PlantUML markdown to image URLs (see the sketch after this list)
  • GitUML uses its own PlantUML server configuration (independent of this server's PLANTUML_BROWSER_URL and PLANTUML_SERVER_URL settings)
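
One way a client can turn a PlantUML block into an image URL is PlantUML's hex text encoding (the ~h URL prefix), sketched below. The server URL and function name are assumptions, and production clients usually prefer the more compact deflate-based encoding:

// Sketch: build a PlantUML server image URL from diagram source.
// Uses the simple "~h" hex encoding accepted by PlantUML servers.
function plantUmlImageUrl(
  source: string,
  server = "https://www.plantuml.com/plantuml",
): string {
  const hex = Buffer.from(source, "utf8").toString("hex");
  return `${server}/png/~h${hex}`;
}

// plantUmlImageUrl("@startuml\nclass Car\n@enduml")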
?stripThinking=false
                 Controls whether <think>...</think> reasoning blocks are kept in AI
                 responses. Defaults to true (blocks are stripped). First-party clients
                 may pass false to retain the blocks; third-party clients should omit
                 the parameter or set it to true.
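
Illustratively, stripping removes the reasoning block before the reply is returned. A client doing the same on its side might look like this sketch (the regex is an assumption about the block format, not the server's implementation):

// Sketch: remove <think>...</think> blocks from a model reply.
function stripThinking(reply: string): string {
  return reply.replace(/<think>[\s\S]*?<\/think>\s*/g, "").trimStart();
}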
Request body:

{
  "activeService": "openai",
  "model": "gpt-4o-mini",
  "apiKey": "sk-...",
  "systemPrompt": "You are a world-class code documenter.",
  "userPrompt": "Explain this code.",
  "sourceCode": "const x = 1;",
  "files": [{"path": "file.ts", "content": "..."}],
  "repairPlantUml": true
}
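
In TypeScript terms, the request body is roughly the following sketch (AiDocRequest is an illustrative name; optionality is inferred from the example, not confirmed by the server):

interface AiDocRequest {
  activeService: string;    // service id from /api/ai-config
  model: string;            // model id for that service
  apiKey?: string;          // needed when the service requiresAuth
  systemPrompt?: string;
  userPrompt: string;
  sourceCode?: string;      // single-blob alternative to files
  files?: { path: string; content: string }[];
  repairPlantUml?: boolean; // validate/repair PlantUML in the reply
}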
Success response:

{
  "serviceName": "openai",
  "modelName": "gpt-4o-mini",
  "systemPrompt": "You are a world-class code documenter.",
  "userPrompt": "Explain this code.",
  "reply": "AI-generated documentation...",
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 100,
    "total_tokens": 150
  },
  "firstPlantUml": "@startuml\n...\n@enduml",
  "plantUmlRepairInfo": {
    "numDiagrams": 2,
    "numErrors": 1,
    "numRepaired": 1
  }
}
Error response:

{
  "serviceName": "openai",
  "modelName": "gpt-4o",
  "systemPrompt": "You are a world-class code documenter.",
  "userPrompt": "Explain this code.",
  "reply": "",
  "error": "Request failed with status code 401",
  "uiMessages": "Error from openai [HTTP 401]: Incorrect API key...",
  "apiResponseMasked": "{ error: {...} }"
}
Response fields (see the TypeScript sketch after this list):

• serviceName        - Which AI service was used
• modelName          - Which model was used
• systemPrompt       - Echo of the system prompt
• userPrompt         - Echo of the user prompt
• reply              - AI-generated response (markdown), empty string on error
• error              - Error message (only present if the request failed)
• uiMessages         - Detailed error/warning message (string, only when applicable)
• apiResponseMasked  - Masked API response for debugging (only on errors)
• usage              - Token usage statistics (when available)
  - prompt_tokens    - Tokens in the input
  - completion_tokens - Tokens in the output
  - total_tokens     - Total tokens used (for billing)
• firstPlantUml      - First PlantUML diagram extracted (if any)
• plantUmlRepairInfo - PlantUML repair stats (when repairPlantUml: true)
  - numDiagrams      - Total diagrams found
  - numErrors        - Diagrams with syntax errors
  - numRepaired      - Successfully repaired diagrams
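
Put together, the response is roughly this TypeScript shape (a sketch; AiDocResponse is an illustrative name):

interface AiDocResponse {
  serviceName: string;
  modelName: string;
  systemPrompt: string;
  userPrompt: string;
  reply: string;              // markdown; empty string on error
  error?: string;             // only present if the request failed
  uiMessages?: string;        // detailed error/warning message
  apiResponseMasked?: string; // masked API response, for debugging errors
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
  firstPlantUml?: string;     // first PlantUML diagram extracted, if any
  plantUmlRepairInfo?: {      // present when repairPlantUml: true
    numDiagrams: number;
    numErrors: number;
    numRepaired: number;
  };
}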
Example:

curl -X POST http://localhost:3000/api/ai-doc \
  -H "Content-Type: application/json" \
  -d '{"activeService": "openai", "model": "gpt-4o-mini", "apiKey": "sk-...", "userPrompt": "Explain this", "sourceCode": "const x = 1;"}'
POST /api/repair-plantuml UPDATED

Repair PlantUML syntax errors in markdown using AI.

Request body:

{
  "markdown": "# Doc\n```plantuml\n@startuml\nclass Car\n@enduml\n```",
  "activeService": "openai",
  "model": "gpt-4o-mini",
  "apiKey": "sk-..."
}
Response shape:

interface RepairResult {
  originalMarkdown: string;
  repairedMarkdown: string;
  numDiagrams: number;
  numErrors: number;
  numRepaired: number;
  repairs: Array<{
    index: number;
    originalSource: string;
    repairedSource: string;
    error: string;
    syntaxErrorLine?: number;
    aiService?: string;
    aiModel?: string;
    aiCallMade?: boolean; // True if AI service was invoked, false for built-in repairs
    repairStatus?: 'success' | 'failed' | 'incomplete' | 'no-error'; // Status of repair attempt
    repairDurationSeconds?: number; // Time taken to repair in seconds
    usage?: { // Token usage for this specific repair (only present if aiCallMade is true)
      prompt_tokens?: number;
      completion_tokens?: number;
      total_tokens?: number;
    };
  }>;
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
  debugInfo?: {
    preprocessingSteps: Array<{
      step: string;
      applied: boolean;
      description: string;
    }>;
  };
}
            
Example response:

{
  "repairedMarkdown": "Below is a self-contained walk-through of the **AUTOMATIC1111 Stable-Diffusion WebUI** β€œscripts” sub... (10606 chars total)",
  "statistics": {
    "numDiagrams": 6,
    "numErrors": 2,
    "numRepaired": 2,
    "repairs": [
      {
        "index": 1,
        "error": "",
        "aiCallMade": false,
        "repairStatus": "no-error"
      },
      {
        "index": 2,
        "error": "participant Gradio\nSyntax Error? (Assumed diagram type: class)",
        "syntaxErrorLine": 4,
        "aiCallMade": false,
        "repairStatus": "success"
      },
      {
        "index": 3,
        "error": "",
        "aiCallMade": false,
        "repairStatus": "no-error"
      },
      {
        "index": 4,
        "error": "fredjunk\nSyntax Error? (Assumed diagram type: activity)",
        "syntaxErrorLine": 4,
        "aiService": "groq",
        "aiModel": "llama-3.3-70b-versatile",
        "aiCallMade": true,
        "repairStatus": "success",
        "repairDurationSeconds": 0.43,
        "usage": {
          "prompt_tokens": 572,
          "completion_tokens": 176,
          "total_tokens": 748
        }
      },
      {
        "index": 5,
        "error": "",
        "aiCallMade": false,
        "repairStatus": "no-error"
      },
      {
        "index": 6,
        "error": "",
        "aiCallMade": false,
        "repairStatus": "no-error"
      }
    ],
    "debugInfo": {
      "preprocessingSteps": [
        {
          "step": "comment-stripping",
          "applied": true,
          "description": "Removed trailing comments (PlantUML doesn't support comments after statements)"
        },
        {
          "step": "allowmixing-add",
          "applied": true,
          "description": "Added 'allowmixing' directive for mixed UML element types (e.g., class + state)"
        },
        {
          "step": "allowmixing-remove",
          "applied": true,
          "description": "Removed 'allowmixing' from pure sequence diagrams (breaks rendering)"
        }
      ]
    }
  },
  "usage": {
    "prompt_tokens": 572,
    "completion_tokens": 176,
    "total_tokens": 748
  }
}
            
Per-repair fields:
• index                 - Diagram number (1, 2, 3...)
• error                 - Error message (empty if no error)
• syntaxErrorLine       - Line number where the error occurred
• aiService             - AI service used (e.g., "openai", "groq")
• aiModel               - Model used (e.g., "gpt-4o-mini")
• aiCallMade            - Boolean: true if AI was invoked, false otherwise
• repairStatus          - Status of repair (see below)
• repairDurationSeconds - Time taken to repair, in seconds
• usage                 - Token usage for this specific repair (when aiCallMade is true)
  - prompt_tokens       - Input tokens for this repair
  - completion_tokens   - Output tokens for this repair
  - total_tokens        - Total tokens for this repair
β€’ "success"     - Diagram had error, AI successfully repaired it
β€’ "no-error"    - Diagram was already valid, no repair needed
β€’ "incomplete"  - AI attempted repair but result still has errors
β€’ "failed"      - AI repair attempt failed completely
Top-level usage field:
• Contains aggregated token usage from ALL AI repair calls
• Sum of all individual repair.usage values
• Only present when AI repairs were attempted (one or more aiCallMade: true)

Per-repair usage field:
• Contains token usage for THAT SPECIFIC repair
• Only present when aiCallMade is true for that repair
• Useful for tracking cost per diagram repair
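
A sketch tying these fields together: given a parsed repair response and per-token pricing (for example from /api/ai-config), report the cost of each AI-repaired diagram. Function and parameter names are illustrative:

// Sketch: per-diagram repair cost report.
// The example response nests repairs under "statistics", while the
// RepairResult interface shows them at the top level, so handle both.
function reportRepairCosts(
  result: any,
  costPerMillionInput: number,
  costPerMillionOutput: number,
) {
  const repairs = result.statistics?.repairs ?? result.repairs ?? [];
  for (const repair of repairs) {
    if (!repair.aiCallMade || !repair.usage) continue; // built-in repairs use no tokens
    const cost =
      ((repair.usage.prompt_tokens ?? 0) / 1_000_000) * costPerMillionInput +
      ((repair.usage.completion_tokens ?? 0) / 1_000_000) * costPerMillionOutput;
    console.log(`diagram ${repair.index} (${repair.repairStatus}): $${cost.toFixed(6)}`);
  }
}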
Example:

curl -X POST http://localhost:3000/api/repair-plantuml \
  -H "Content-Type: application/json" \
  -d '{"markdown": "```plantuml\n@startuml\nclass Car\n@enduml\n```", "activeService": "openai"}'

ALSM Server v1.0 | Powered by Bun 🥟