Founding AI Frameworks Engineer - San Francisco, California
Company: TensorLake Inc. Location: San Francisco, California
Posted On: 01/17/2025
Tensorlake is building a distributed data processing platform for developers building Generative AI applications. Our product, Indexify, enables building continuously evolving knowledge bases and indexes for Large Language Model applications by running structured-data or embedding extraction algorithms on any unstructured data.

We are building a serverless product on top of Indexify that allows users to build real-time extraction pipelines for unstructured data. The extracted data and indexes are consumed directly by AI applications and LLMs to power business and consumer applications.

As an AI Frameworks Engineer, you will be responsible for optimizing our AI infrastructure, developing high-performance inference engines, and maximizing GPU utilization. You'll work on the critical backend architecture that powers our platform's scalability and performance, collaborating with both researchers and product engineers to ensure Tensorlake's models run efficiently on a variety of hardware configurations.

Responsibilities

As an AI Frameworks Engineer, your focus will be on optimizing and building high-performance AI systems. You will:

- Design and build custom inference engines optimized for high throughput and low latency.
- Optimize GPU usage across our platform, ensuring that deep learning models run efficiently at scale.
- Write and optimize custom CUDA kernels and other low-level operations to accelerate deep learning workloads.
- Develop and implement techniques for model compression, including quantization and pruning, to make models more efficient for real-world deployment.
- Collaborate with research scientists and engineers to integrate new models into Tensorlake's platform while ensuring peak performance.
- Utilize cuDNN, cuBLAS, and other GPU-accelerated libraries to optimize computational workloads.
- Troubleshoot and debug performance bottlenecks using tools like nvprof and Nsight, and implement fixes to improve throughput and memory usage.
- Work on scaling AI models to multiple GPUs and nodes using NCCL and other parallel computing techniques.

Basic Qualifications
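To give a flavor of the model-compression work mentioned above, here is a minimal sketch of symmetric per-tensor int8 post-training quantization, using only NumPy. The function names and the per-tensor symmetric scheme are illustrative assumptions for this sketch, not a description of Tensorlake's actual pipeline:

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    # Symmetric per-tensor quantization: map floats into [-127, 127]
    # using a single scale derived from the largest absolute value.
    scale = float(np.abs(weights).max()) / 127.0 if weights.size else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights from the int8 values.
    return q.astype(np.float32) * scale

# Quantize a random weight matrix and check the reconstruction error,
# which is bounded by half a quantization step (scale / 2).
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

In practice, production quantization schemes are more involved (per-channel scales, calibration data, quantization-aware training), but the storage win is the same idea: int8 weights take a quarter of the memory of float32, which directly improves the GPU-utilization and inference-throughput goals described above.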