Dependencies & Setup¶
System Requirements¶
- Python: 3.11+
- Ollama: Latest version for local inference
- Storage: ~10 GB for models
- RAM: 8 GB minimum, 16 GB recommended
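Before running the install steps, the Python requirement can be checked programmatically. This is a minimal sketch (the helper name is illustrative, not part of the project):

```python
# Hedged sketch: confirm the running interpreter meets the 3.11+ requirement
# before creating the virtual environment.
import sys

def meets_requirement(version_info=sys.version_info, minimum=(3, 11)):
    """Return True when the interpreter version is at least `minimum`."""
    return tuple(version_info[:2]) >= minimum

print(meets_requirement())  # True on any Python 3.11+ interpreter
```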
Installation¶
# 1. Clone repository
git clone <repo-url>
cd Local-File-Organizer
# 2. Install Ollama and pull models
ollama pull qwen2.5:3b-instruct-q4_K_M # Text: ~1.9 GB
ollama pull qwen2.5vl:7b-q4_K_M # Vision: ~6.0 GB
# 3. Create virtual environment
python3 -m venv venv
source venv/bin/activate
# 4. Install package
pip install -e .
# 5. Verify
file-organizer --version
fo --version
Optional Dependencies¶
pip install -e ".[audio]" # Audio transcription (faster-whisper, torch)
pip install -e ".[video]" # Video processing (opencv, scenedetect)
pip install -e ".[dedup]" # Image deduplication (imagededup)
pip install -e ".[archive]" # Archive support (7z, RAR)
pip install -e ".[scientific]" # Scientific formats (HDF5, NetCDF, MATLAB)
pip install -e ".[cloud]" # OpenAI-compatible API providers (openai)
pip install -e ".[cad]" # CAD formats (ezdxf)
pip install -e ".[desktop]" # Native desktop window (pywebview + uvicorn)
pip install -e ".[build]" # Executable packaging (PyInstaller)
pip install -e ".[llama]" # llama.cpp provider — direct GGUF inference (no Ollama server)
pip install -e ".[mlx]" # Apple Silicon MLX provider (Darwin only)
pip install -e ".[claude]" # Anthropic Claude API provider (Claude 3.x text and vision)
pip install -e ".[search]" # Hybrid BM25 + vector search (rank-bm25, scikit-learn)
pip install -e ".[all]" # Everything
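Extras like these are usually guarded at import time so a feature degrades gracefully when its extra is not installed. A minimal sketch of that pattern (the helper and module names here are assumptions for illustration, not the project's actual code):

```python
# Sketch of the optional-dependency guard pattern behind extras:
# check whether the optional module is importable without importing it.
import importlib.util

def has_extra(module_name: str) -> bool:
    """True if the optional module can be found (i.e. the extra is installed)."""
    return importlib.util.find_spec(module_name) is not None

# e.g. audio transcription would only be offered when faster-whisper is present
AUDIO_AVAILABLE = has_extra("faster_whisper")
```

Note that extras can also be combined in a single install, e.g. `pip install -e ".[audio,video]"`.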
Note: Additional extras (gui, docs, dev, web, parsers, cloud) are available in pyproject.toml for GUI support, documentation building, development tooling, and cloud/OpenAI-compatible provider support.
CLI Entrypoints¶
# pyproject.toml
[project.scripts]
file-organizer = "file_organizer.cli:main"
fo = "file_organizer.cli:main"
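Both script names point at the same `module:function` target, so `fo` is a strict alias of `file-organizer`. An illustrative sketch of the callable shape that `[project.scripts]` expects (the flag and version string here are hypothetical, not the project's real CLI):

```python
# Sketch of a console-script entrypoint: setuptools/pip generate a wrapper
# that imports file_organizer.cli and calls main(), using its return value
# as the process exit code. Both `fo` and `file-organizer` hit this function.
import argparse

def main(argv=None) -> int:
    parser = argparse.ArgumentParser(prog="file-organizer")
    parser.add_argument("--version", action="store_true")  # hypothetical flag
    args = parser.parse_args(argv)
    if args.version:
        print("file-organizer 0.0.0")  # hypothetical version string
    return 0
```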