LLM Training Frameworks and Optimization Engineer - San Francisco, California
Company: Tbwa Chiat/Day Inc Location: San Francisco, California
Posted On: 02/03/2025
About Us

At Together.ai, we are building cutting-edge infrastructure to enable efficient and scalable training of large language models (LLMs). We focus on optimizing training frameworks, algorithms, and infrastructure to push the boundaries of AI performance, scalability, and cost-efficiency.

We are seeking an LLM Training Frameworks and Optimization Engineer to drive innovations in the development and optimization of distributed training frameworks. In this role, you will ensure that our LLM training pipelines are robust, efficient, and capable of handling the complexities of large-scale distributed systems.

Responsibilities

- Framework Development and Optimization:
- Design, implement, and optimize distributed training frameworks tailored for large language models.
- Develop custom modules, plugins, and features to enhance framework scalability and performance.
- Algorithmic and Systems Optimization:
- Optimize communication patterns (e.g., gradient synchronization, all-reduce) in distributed training.
- Implement techniques like mixed precision, tensor parallelism, pipeline parallelism, and sharded training (a minimal DDP/mixed-precision sketch follows this list).
- Performance Tuning:
- Conduct in-depth profiling and debugging of training jobs to identify and resolve bottlenecks.
- Collaborate with hardware teams to optimize performance for GPUs, TPUs, and other accelerators.
- Scalability and Resilience:
- Ensure training systems scale efficiently to thousands of nodes and petabytes of data.
- Develop resilience mechanisms for fault-tolerant and checkpointed training pipelines.
- Collaboration and Support:
- Work closely with researchers, data engineers, and platform teams to ensure training frameworks meet model and workload requirements.
- Provide guidance and tools to improve the overall efficiency of the LLM development lifecycle.
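As a rough illustration of the distributed-training responsibilities above, here is a minimal sketch of a PyTorch DistributedDataParallel (DDP) training step with mixed precision. The model, dimensions, and training loop are placeholders, not Together.ai's actual stack; DDP itself performs the gradient all-reduce during backward().

```python
# Minimal sketch: DDP training step with mixed precision.
# Placeholder model and synthetic data; not Together.ai's actual stack.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # stand-in for an LLM
    model = DDP(model, device_ids=[local_rank])           # all-reduces grads in backward()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()                  # loss scaling for fp16

    for step in range(10):
        x = torch.randn(8, 4096, device=local_rank)
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():                    # mixed-precision forward
            loss = model(x).square().mean()
        scaler.scale(loss).backward()                      # DDP overlaps grad all-reduce here
        scaler.step(optimizer)
        scaler.update()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, e.g., `torchrun --nproc_per_node=8 train.py`, which populates the RANK, LOCAL_RANK, and WORLD_SIZE environment variables for each worker. Frameworks named in the posting (DeepSpeed, Megatron-LM) layer tensor, pipeline, and ZeRO-style sharded parallelism on top of this same pattern.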
Qualifications

Must-Have:
- Experience:
- 5+ years of experience in deep learning frameworks, distributed systems, or machine learning infrastructure.
- Technical Skills:
- Expertise in distributed training frameworks (e.g., PyTorch DDP, DeepSpeed, Megatron-LM, TensorFlow XLA).
- Strong understanding of parallelism techniques (e.g., data, tensor, pipeline, and ZeRO-based parallelism).
- Familiarity with GPU/TPU hardware and deep learning performance optimizations.
- Programming:
- Proficient in Python and C++ or CUDA for high-performance computing.
- Experience with memory optimization techniques (e.g., activation checkpointing, gradient sharding); a minimal activation-checkpointing sketch follows this list.
- Knowledge of training dynamics for large-scale LLMs, including hyperparameter tuning and optimization.
- Soft Skills:
- Analytical problem-solving skills and a focus on performance improvement.
- Strong collaboration and communication skills across teams.
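Activation checkpointing, mentioned in the memory-optimization requirement above, trades compute for memory by discarding intermediate activations in the forward pass and recomputing them during backward. A minimal sketch using torch.utils.checkpoint.checkpoint_sequential; the toy model and segment count are illustrative assumptions, not requirements from the posting:

```python
# Minimal sketch: activation checkpointing via torch.utils.checkpoint.
# Toy model and sizes are illustrative placeholders.
import torch
from torch.utils.checkpoint import checkpoint_sequential

# A stack of 8 small blocks standing in for transformer layers.
model = torch.nn.Sequential(
    *[torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.GELU())
      for _ in range(8)]
)

x = torch.randn(4, 1024, requires_grad=True)

# Split the stack into 2 segments: only segment-boundary activations are
# stored; everything inside a segment is recomputed during backward().
out = checkpoint_sequential(model, 2, x, use_reentrant=False)
out.mean().backward()
```

In a real LLM stack the same idea is typically applied per transformer block, either directly via torch.utils.checkpoint.checkpoint or through framework-level support such as DeepSpeed's activation checkpointing.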
Nice-to-Have: