## Quick Start
- Download and install LM Studio
- Load a model in the GUI
- Start the local server: Settings > Local Server > Start
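To confirm the server started, you can probe its OpenAI-compatible `/v1/models` endpoint. This is a minimal stdlib sketch; the function name `lmstudio_reachable` is my own, not part of any tool here:

```python
# Sketch: verify an LM Studio server is reachable before pointing a client at it.
# /v1/models is part of LM Studio's OpenAI-compatible API surface.
import json
import urllib.request
import urllib.error

def lmstudio_reachable(base_url: str = "http://localhost:1234/v1") -> bool:
    """Return True if an LM Studio server answers on base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=2) as resp:
            data = json.load(resp)
            # An OpenAI-style listing carries loaded models under "data".
            return isinstance(data.get("data"), list)
    except (urllib.error.URLError, OSError, ValueError):
        return False

if __name__ == "__main__":
    print("LM Studio server up:", lmstudio_reachable())
```

If this prints `False`, check that a model is loaded and the server toggle is on.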
## Config
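The exact configuration depends on the client you are pointing at LM Studio. As a sketch only, an OpenAI-compatible custom-provider entry typically combines the values from the Provider Details table below; the key names here (`provider`, `apiType`, `baseUrl`, `apiKey`, `model`) are illustrative, not any specific tool's schema:

```json
{
  "provider": "lmstudio",
  "apiType": "openai-completions",
  "baseUrl": "http://localhost:1234/v1",
  "apiKey": "not-needed",
  "model": "<model id reported by LM Studio>"
}
```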
## Use it
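Because the server speaks the OpenAI chat-completions protocol, any OpenAI-compatible client works; no API key is needed. A minimal stdlib sketch (the model ID is a placeholder; use whatever your LM Studio instance reports):

```python
# Sketch: build a request against LM Studio's OpenAI-compatible
# /chat/completions endpoint using only the standard library.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST to /chat/completions."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},  # no API key required
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("your-model-id", "Say hello in one word.")
    # With the server running and a model loaded, send it:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```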
## Auto-Discovery
Clients that support auto-discovery probe `localhost:1234` (LM Studio's default port) for a running LM Studio server.
## Provider Details
| Setting | Value |
| --- | --- |
| Provider ID | `lmstudio` (custom) |
| Default port | `1234` |
| API type | `openai-completions` |
| Base URL | `http://localhost:1234/v1` |
| API key | Not required |
| Cost | Free (runs locally) |
## Notes
- LM Studio’s model IDs may differ from Ollama’s. Check the ID LM Studio reports in its server logs, or query `GET /v1/models`.
- LM Studio has a user-friendly GUI for model management — good for users who prefer not to use the command line.
- Supports both CPU and GPU inference. GPU is strongly recommended for usable performance.
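Since the server exposes an OpenAI-style model listing, the IDs can also be pulled programmatically. A small sketch (the helper names and the sample response are illustrative; real IDs come from your server):

```python
# Sketch: discover which model IDs LM Studio actually serves via
# GET /v1/models (OpenAI-compatible listing).
import json
import urllib.request

def parse_model_ids(models_response: dict) -> list:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in models_response.get("data", [])]

def list_lmstudio_models(base_url: str = "http://localhost:1234/v1") -> list:
    """Fetch and parse the live model listing from a running server."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=2) as resp:
        return parse_model_ids(json.load(resp))

if __name__ == "__main__":
    # Illustrative response shape only -- real IDs come from your server.
    sample = {"object": "list", "data": [{"id": "some-model", "object": "model"}]}
    print(parse_model_ids(sample))
```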