Fortytwo is a decentralized AI protocol on Monad that leverages idle consumer hardware for swarm inference. It enables small language models (SLMs) to achieve advanced multi-step reasoning at lower cost, surpassing the performance and scalability of leading models.
Responsibilities:
- Deploy scalable, production-ready ML services with optimized infrastructure and auto-scaling Kubernetes clusters.
- Optimize GPU resources using MIG (Multi-Instance GPU) and NOS (Node Offloading System).
- Manage cloud storage (e.g., S3) to ensure high availability and performance.
- Integrate state-of-the-art ML techniques, such as LoRA and model merging, into workflows:
  - Work with SOTA ML codebases and adapt them to organizational needs.
  - Integrate LoRA (Low-Rank Adaptation) and model-merging workflows (see the merge sketch after this list).
- Deploy and manage large language models (LLMs), small language models (SLMs), and large multimodal models (LMMs):
  - Serve models using technologies such as Triton Inference Server.
  - Leverage vLLM, TGI (Text Generation Inference), and other state-of-the-art serving frameworks (see the serving sketch after this list).
  - Optimize models with ONNX and TensorRT for efficient deployment.
- Develop Retrieval-Augmented Generation (RAG) systems integrating spreadsheet, math, and compiler processors.
- Set up monitoring and logging solutions using Grafana, Prometheus, Loki, Elasticsearch, and OpenSearch.
- Write and maintain CI/CD pipelines using GitHub Actions for seamless deployments.
- Create Helm templates for rapid Kubernetes node deployment.
- Automate workflows using cron jobs and Airflow DAGs (see the DAG sketch after this list).
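To ground the LoRA and model-merging item above, here is a minimal sketch of folding a trained LoRA adapter back into its base model with Hugging Face PEFT. The model name and adapter path are hypothetical placeholders, not details from the posting:

```python
# Minimal sketch: merge a LoRA adapter into its base model with PEFT.
# "example/base-model" and "path/to/lora-adapter" are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("example/base-model")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the low-rank adapter weights into the base weights so the
# merged model can be served without any adapter machinery.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```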
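In the same spirit, a minimal sketch of offline batched inference with vLLM, one of the serving frameworks named above; the model path is a placeholder:

```python
# Minimal sketch: offline batched generation with vLLM.
# "merged-model" is a placeholder path (e.g., the output of the merge above).
from vllm import LLM, SamplingParams

llm = LLM(model="merged-model")
params = SamplingParams(temperature=0.7, max_tokens=128)

# generate() returns one RequestOutput per prompt.
for output in llm.generate(["Explain swarm inference in one sentence."], params):
    print(output.outputs[0].text)
```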
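And for the workflow-automation item, a minimal sketch of a daily Airflow DAG using the TaskFlow API (Airflow 2.x); the task names and bodies are illustrative only:

```python
# Minimal sketch: a daily Airflow DAG for a recurring model-refresh job.
# Task names and bodies are placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def model_refresh():
    @task
    def pull_metrics() -> dict:
        # Placeholder: fetch serving metrics from monitoring.
        return {"p95_latency_ms": 42}

    @task
    def maybe_redeploy(metrics: dict):
        # Placeholder: trigger a redeploy if thresholds are breached.
        print("metrics:", metrics)

    maybe_redeploy(pull_metrics())

model_refresh()
```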
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proficiency in Kubernetes, Helm, and containerization technologies.
- Experience with GPU optimization (MIG, NOS) and cloud platforms (AWS, GCP, Azure).
- Strong knowledge of monitoring tools (Grafana, Prometheus) and scripting languages (Python, Bash).
- Hands-on experience with CI/CD tools and workflow management systems.
- Familiarity with Triton Inference Server, ONNX, and TensorRT for model serving and optimization.
Preferred:
- 5+ years of experience in MLOps or ML engineering roles.
- Experience with advanced ML techniques, such as multi-sampling and dynamic temperatures (see the sketch after this list).
- Knowledge of distributed training and large-model fine-tuning.
- Proficiency in Go or Rust.
- Experience designing and implementing highly secure MLOps pipelines, including secure model deployment and data encryption.
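As a rough illustration of multi-sampling with dynamic temperatures, the sketch below draws several candidates at increasing temperatures and picks one. The model path and the selection rule are placeholders; a production pipeline would score candidates with a verifier or a majority vote:

```python
# Rough sketch: sample several candidates per prompt at varying
# temperatures ("dynamic temperatures"), then select among them.
from vllm import LLM, SamplingParams

llm = LLM(model="merged-model")  # placeholder path
prompt = "What is 17 * 24? Think step by step."

candidates = []
for temp in (0.2, 0.7, 1.0):
    outs = llm.generate([prompt], SamplingParams(temperature=temp, max_tokens=256))
    candidates.append(outs[0].outputs[0].text)

# Placeholder selection rule: prefer the most detailed answer.
# Real systems verify answers or vote across candidates instead.
best = max(candidates, key=len)
print(best)
```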
Why Work with Us:
At Fortytwo, we are building a research-driven, decentralized AI infrastructure that prioritizes scalability, efficiency, and sustainability. Our approach moves beyond centralized AI constraints, applying globally scalable swarm intelligence to enhance LLM reasoning and problem-solving capabilities.
- Engage in meaningful AI research – Work on decentralized inference, multi-agent systems, and efficient model deployment with a team that values rigorous, first-principles thinking.
- Build scalable and sustainable AI – Design AI systems that reduce reliance on massive compute clusters, making advanced models more efficient, accessible, and cost-effective.
- Collaborate with a highly technical team – Join engineers and researchers who are deeply experienced, intellectually curious, and motivated by solving hard problems.
We’re looking for individuals who thrive in research-driven environments, value autonomy, and want to work on foundational AI challenges.
What We Do
Fortytwo is a decentralized AI network of small language models on everyday devices, collaborating to achieve scale and skill beyond centralized AI.