Artificial intelligence has gone from conference-paper abstraction to the defining category of software engineering hiring in a span of roughly three years. In 2023, "AI/ML engineer" was still a niche specialization found mostly at research labs and a handful of frontier companies. By early 2026, it has become the single highest-compensated discipline in the software industry, with an average salary of $202,837 across the 1,259 AI/ML-tagged positions in our index that include explicit salary data. That figure is not a ceiling. It is a mean, dragged upward by a cluster of roles at the very top of the market where base compensation alone exceeds $300,000.
To map the current state of AI hiring, we analyzed the full findjobs.dev index of 77,480 active software engineering job listings. Within that dataset, 1,757 jobs carry the AI/ML industry tag, but the true scope of AI-related hiring is significantly larger. When you include every listing that requires AI-adjacent technologies, including LLM integration, PyTorch, TensorFlow, OpenAI tools, and MLflow, the number of jobs where AI competence is a meaningful hiring criterion approaches 4,000. The field has spilled beyond its original boundaries. AI is no longer a department. It is an expectation.
What follows is an examination of who is hiring, what they are paying, which tools command the highest premiums, and a finding that may surprise anyone who assumes AI work is inherently remote-friendly: only one in four AI/ML positions offers fully remote work. The reasons for that are worth understanding, because they reveal something important about where the field is headed.
The AI Salary Stack
Average annual salary by AI-related technology. Based on listings with explicit compensation data.
The salary hierarchy within AI-related technologies tells a clear story about where value concentrates. At the top, roles requiring OpenAI tools and API integration average $236,618, a figure that reflects the premium companies place on engineers who can ship production applications built on frontier models. These are not research positions. They are product engineering roles where the core competency is wrapping model capabilities in reliable, scalable infrastructure. The $50,000 gap between OpenAI-tagged roles and general Python jobs ($186,770) is one of the most dramatic technology-specific premiums in our entire dataset.
PyTorch has decisively overtaken TensorFlow. This is evident in both volume and compensation. Our index contains 627 PyTorch roles with salary data versus 395 for TensorFlow, a 59% lead in job count. The salary gap is smaller but real: $218,936 for PyTorch versus $208,935 for TensorFlow, roughly a $10,000 annual premium. The shift has been gradual but is now structural. PyTorch's dominance in research contexts has percolated into production ML, particularly as tools like TorchServe and the broader PyTorch ecosystem have matured for deployment. TensorFlow remains entrenched in organizations with large existing codebases built on TF1/TF2, but new projects overwhelmingly default to PyTorch.
The LLM category is the broadest of the AI-specific tags, encompassing 1,028 roles with salary data at an average of $209,704. This includes everything from prompt engineering positions to fine-tuning specialists to engineers building retrieval-augmented generation (RAG) pipelines. The breadth of the category explains why its average sits below PyTorch and OpenAI: it includes junior-adjacent "LLM integration" roles alongside senior positions building custom training infrastructure. When filtered to senior-level positions only, LLM roles command averages closer to $230,000.
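To make the RAG pattern concrete, here is a minimal sketch of the retrieve-then-prompt loop. It substitutes a toy bag-of-words cosine similarity for a real embedding model, and the function names (`retrieve`, `build_prompt`) are illustrative rather than taken from any particular library:

```python
import re
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would use a learned model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context to the user question: the 'augmented' in RAG."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "MLflow tracks experiments and model versions.",
    "PyTorch is a deep learning framework.",
    "Spark processes large datasets in parallel.",
]
print(build_prompt("Which framework tracks experiments?", docs))
```

Production pipelines replace `vectorize` with embedding-model calls and a vector store, but the retrieve-then-prompt shape is the same.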
The AI salary premium is not about knowing AI in the abstract. It is about knowing how to ship AI in production. OpenAI tool integration pays $50,000 more than general Python. The market rewards execution over theory.
MLflow and the MLOps premium deserve particular attention. At $211,611 across 283 listings, MLflow-tagged roles pay almost exactly what LLM roles pay. This is not a coincidence. The bottleneck in AI adoption at most companies is not model development. It is operationalization: getting models into production, monitoring their performance, managing drift, and maintaining reproducibility across training runs. Engineers who bridge the gap between data science and production infrastructure, the MLOps layer, are commanding salaries that rival those of the model builders themselves. Spark's position at $194,242 reflects a similar dynamic: much of the Spark demand in 2026 is driven by ML pipeline construction rather than traditional big-data analytics.
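"Managing drift" is concrete enough to sketch. One common drift metric is the Population Stability Index (PSI): bin a feature's baseline distribution, compare it to the live distribution, and alert when the score crosses a threshold (0.2 is a widely used rule of thumb). A stdlib-only sketch, with the binning scheme simplified for illustration:

```python
from math import log

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.
    Bins come from the baseline's range; an epsilon avoids log(0)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = int((x - lo) / width)
            counts[min(max(i, 0), bins - 1)] += 1  # clamp out-of-range values
        return [(c + 1e-6) / len(xs) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions score near 0; a shifted distribution scores high.
baseline = [i / 100 for i in range(1000)]
shifted = [x + 5 for x in baseline]
print(psi(baseline, baseline))  # 0.0
print(psi(baseline, shifted))   # well above the 0.2 alert threshold
```

A monitoring job would compute this per feature on a schedule and page the team when the threshold trips; tools like MLflow then tie the alert back to the training run that produced the model.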
Python anchors the bottom of this stack at $186,770, but that figure requires context. With 5,912 salary-reporting listings, Python's average is diluted by the sheer range of roles that require it: everything from basic backend development to quantitative research. When Python appears alongside ML-specific tools (PyTorch, LLM, MLflow), the effective salary is much higher. Python alone is a commodity. Python combined with production AI expertise is a premium skill set.
The Framework War: PyTorch vs. TensorFlow
How the two dominant ML frameworks compare across salary, job count, and market trajectory.
| Metric | PyTorch | TensorFlow | Difference |
|---|---|---|---|
| Avg. Salary (listings with salary data) | $218,936 | $208,935 | +$10,001 |
| Job Count (with salary data) | 627 | 395 | 59% more |
| Median Salary (50th percentile) | $215K | $200K | +$15K |
| Primary Context (most common use case) | Research & new projects | Legacy production & mobile | -- |
The PyTorch vs. TensorFlow comparison has evolved from a legitimate technical debate into what is now, by the numbers, a settled question. PyTorch leads on every metric that matters to an engineer evaluating where to invest their time: higher average salary ($219K vs. $209K), more open positions (59% more listings with salary data), and a higher median that is less distorted by outliers. The $15,000 median gap is particularly meaningful because it reflects the core of the market rather than the tails.
This does not mean TensorFlow expertise is worthless. Far from it. The 395 TensorFlow roles in our salary dataset represent real demand, much of it concentrated in organizations with significant existing investment in TensorFlow Serving, TFX pipelines, and TensorFlow Lite for mobile and edge deployment. Google's own ecosystem, including Android on-device ML, continues to rely heavily on TensorFlow. But the direction of travel is clear. New ML projects at startups and growth-stage companies overwhelmingly start with PyTorch, and the job market reflects that preference.
For engineers choosing where to specialize, the data suggests that PyTorch fluency is the higher-expected-value investment. The salary premium is real, the job market is larger, and the trajectory favors continued PyTorch dominance in research-to-production workflows. TensorFlow expertise remains valuable as a complement, particularly for engineers who need to work with mobile ML or maintain existing production systems, but it is no longer a sufficient foundation for an AI/ML career on its own.
The AI Remote Work Paradox
Despite being a tech-forward field, AI/ML has one of the lowest remote adoption rates in software engineering.
Here is a number that should give pause to anyone who assumes AI work happens entirely on laptops in coffee shops: only 25% of AI/ML positions in our index are fully remote. That is 969 out of 3,878 jobs, a rate that places AI/ML well below fintech (34% remote), developer tools (32%), and even general SaaS roles (29%). For a field that runs on code, models, and cloud compute, the relatively low remote adoption rate demands explanation.
Three forces push AI work toward physical offices. The first is hardware access. While cloud GPU instances are ubiquitous, many serious ML organizations maintain on-premise GPU clusters for cost efficiency at scale and for working with proprietary data that cannot leave controlled environments. Engineers who need to interact with custom hardware setups, debug distributed training runs across physical machines, or work in environments with air-gapped security requirements need to be on-site. This is particularly true at companies building foundation models, where training infrastructure is a competitive advantage that they are not willing to abstract behind a cloud API.
The second force is security and data governance. AI work frequently involves proprietary datasets, trade secrets embedded in model architectures, and in sectors like healthcare and defense, data subject to strict regulatory controls. Many organizations have determined that the risk profile of allowing remote access to these assets is unacceptable, even with VPNs, zero-trust networking, and endpoint management. The result is a blanket onsite requirement for anyone touching sensitive training data or production model weights.
The third factor is more pragmatic: collaboration density. ML development involves tight iteration loops between researchers, ML engineers, data engineers, and product teams. The ambiguity inherent in model development, where the "right" approach is often discovered through rapid experimentation rather than specified in advance, favors high-bandwidth communication. Many AI teams have concluded that the productivity cost of asynchronous collaboration outweighs the hiring-pool advantages of remote work, particularly when they are already paying salaries high enough to attract top talent locally.
AI remote rate by company size
The remote rate varies significantly with company maturity. AI startups with fewer than 50 employees offer remote work at roughly 40%, reflecting their need to access talent regardless of geography. Mid-stage companies (50-500 employees) drop to around 22% remote, as they build out physical lab infrastructure. Large enterprises with established AI divisions hover around 20%, constrained by security policies and existing office investments. The frontier labs (OpenAI, Anthropic, DeepMind) are notably onsite-heavy, requiring physical presence for most research and engineering roles.
Where AI/ML Sits in the Industry Salary Hierarchy
Average salary by industry vertical. AI/ML commands the second-highest compensation in software engineering.
At an average of $203,000, AI and machine learning sits just below gaming ($211K) in the industry salary hierarchy and commands a $23,000 premium over general SaaS roles ($180K). This ranking may surprise readers who expected AI to claim the top spot outright. Gaming's lead is driven by a specific phenomenon: the convergence of graphics programming, simulation engineering, and increasingly, game-engine AI work that draws from a vanishingly small pool of engineers with combined expertise in real-time rendering and machine learning. The gaming premium is concentrated at a few large studios and engine companies rather than being distributed across a broad market.
The automotive sector at $200,000 reflects the autonomous vehicle and ADAS (Advanced Driver-Assistance Systems) hiring wave that has continued despite high-profile setbacks in the self-driving space. Companies like Waymo, Cruise's successor operations, and the major automakers' in-house AV divisions are paying AI/ML-adjacent salaries for engineers working on perception, planning, and simulation systems. This work sits at the intersection of AI and safety-critical systems engineering, a combination that commands a significant premium.
The $8,000 gap between AI/ML ($203K) and fintech ($195K) is meaningful because it represents the market's current assessment of relative scarcity. Fintech has been a top-paying industry for over a decade, with well-established compensation bands and a deep bench of experienced engineers. AI/ML compensation is being set by a market that is still in price-discovery mode, where the supply of experienced production ML engineers has not yet caught up with demand. This gap could narrow as more engineers complete the transition from traditional software roles into ML-focused work, or it could widen if the pace of AI adoption continues to accelerate faster than the talent pipeline can expand.
AI/ML has vaulted past fintech, cybersecurity, and developer tools to become the second-highest-paying industry vertical in software engineering. Three years ago, it was not even in the top five.
The OpenAI Ecosystem Effect
How one company's technology stack has created an entire salary tier.
No single company has reshaped the engineering labor market as dramatically as OpenAI has in the past two years. The impact is visible in two distinct channels. The first is direct employment: OpenAI itself lists 279 open positions in our index, making it one of the largest single-company presences in the dataset. Their engineering roles demand Python, Go, Kubernetes, and JavaScript, a stack that reflects an organization building cloud-scale infrastructure rather than purely doing research. These positions are overwhelmingly located in San Francisco and carry compensation that places them at the very top of the market.
The second, larger impact is the ecosystem effect. The 732 listings across hundreds of companies that specifically require OpenAI API and tooling experience have created what amounts to a new job category. These roles are distinct from general ML engineering. They involve building applications on top of OpenAI's models: designing prompt architectures, implementing function calling and tool use patterns, managing token economics, building evaluation frameworks, and handling the unique operational challenges of systems that depend on external model APIs. The $236,618 average salary for these roles reflects the fact that this is genuinely new expertise. There is no existing talent pipeline trained specifically in production GPT integration; every engineer in this space is either self-taught or has learned on the job within the past 18 to 24 months.
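One of those operational challenges, surviving rate limits from an external model API, reduces to a familiar pattern: retry with exponential backoff and jitter. The sketch below is generic; `RateLimitError` is a stand-in for whatever 429-style exception a real SDK raises, not a class from any actual client library:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429-style error a real model SDK would raise."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 0.01):
    """Retry `call` on rate limits, doubling the wait (plus jitter) each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Simulate an API that fails twice before succeeding.
calls = {"n": 0}
def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429: slow down")
    return "completion text"

print(with_backoff(flaky_completion))  # succeeds on the third attempt
```

In production this wraps the actual API call, and the delays are seconds rather than milliseconds; the jitter term keeps a fleet of retrying clients from hammering the API in lockstep.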
The 13% premium that OpenAI-tagged roles carry over general LLM positions reveals an important nuance. "LLM experience" has become a broad category that includes everything from experimenting with open-source models to deploying Claude or Gemini integrations. OpenAI-specific experience commands a premium because the OpenAI API is the most widely deployed LLM interface in production, and employers are willing to pay extra for engineers who already know its idiosyncrasies, rate limits, best practices for structured output, and the practical differences between model versions. This is a skills premium born of market timing: the first generation of engineers to build serious products on GPT-3.5 and GPT-4 are now the most experienced practitioners of a discipline that barely existed three years ago.
Who Is Hiring in AI
The largest AI-related employers by listing volume in the findjobs.dev index.
| Company | Listings | Key Tech | Notable |
|---|---|---|---|
| Databricks (data + AI platform) | 332 | MLflow, Spark, Scala, Python, Java | Largest MLOps employer |
| OpenAI (frontier AI research) | 279 | Python, Go, Kubernetes, JS | Mostly onsite (SF) |
| Speechify (AI-powered consumer) | 2,272 | LLM, Python, React, Node.js | AI-native product company |
The three companies that illustrate the spectrum of AI hiring most clearly are Databricks, OpenAI, and Speechify, each representing a fundamentally different model for how AI talent is deployed in production organizations.
Databricks, with 332 listings, is the prototypical AI infrastructure company. Their hiring reflects the reality that most enterprise AI work is not about building models from scratch. It is about building the platforms that enable model development, deployment, and monitoring at scale. Their technology stack, centered on MLflow, Spark, Scala, and Python, reads like a curriculum for MLOps engineering. Databricks' outsized hiring volume signals that the tools layer of the AI stack is where much of the growth is happening. For every engineer training a model, several more are needed to build the infrastructure that makes training possible, repeatable, and observable.
OpenAI at 279 listings represents the frontier research model, but their job postings tell a more nuanced story than the "AI lab" label suggests. A significant fraction of their open roles are in traditional software engineering disciplines: backend infrastructure, API reliability, developer experience, and platform security. The company has transitioned from a pure research lab to a product organization that needs the same kinds of engineers as any large-scale cloud service provider. The difference is that those engineers work in an environment where the core product is a model API that handles billions of tokens per day, which introduces unique scaling and reliability challenges.
Speechify's 2,272 listings represent a third model entirely: the AI-native consumer product company. While their total listing count dwarfs OpenAI and Databricks combined, this volume reflects a broader product organization that happens to be built on AI rather than a company that sells AI tools or services. This pattern, where AI is embedded in the product rather than being the product, is increasingly where AI hiring volume actually concentrates. For every OpenAI, there are dozens of Speechify-like companies that need engineers to integrate AI capabilities into consumer-facing applications across text-to-speech, content generation, and personalization.
The AI Salary Premium in Context
How AI/ML compensation compares to other major technology categories.
The most arresting comparison in the entire dataset is the $44,000 gap between LLM roles and JavaScript roles. Both are "software engineering." Both involve writing code, building systems, and shipping products. But the market has decided that the engineer who integrates a language model into a production application is worth $44,000 per year more than the engineer who builds the application's frontend. This is not a judgment about the intrinsic difficulty or importance of either kind of work. It is a price signal driven by supply and demand: there are far more engineers who can write competent React components than there are engineers who can build reliable, cost-effective LLM-powered features.
The OpenAI-versus-Python premium is even more revealing because it isolates the value of domain-specific expertise from language skill. Both categories of engineers write Python every day. The difference is entirely in what they build with it. An engineer who writes Python to build a Django REST API is competing in a labor market with millions of other Python developers worldwide. An engineer who writes Python to build a production RAG pipeline with streaming responses, function-calling orchestration, and cost-optimized model routing is competing in a market with perhaps tens of thousands of experienced practitioners. That 27% premium, $50,000 in real dollars, is what scarcity looks like when it meets urgent demand.
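The "cost-optimized model routing" mentioned here is a simple idea: send easy requests to a cheap model tier and escalate hard ones. A toy sketch with made-up model names and per-token prices, purely for illustration (not real pricing for any provider):

```python
# Hypothetical price table: (input, output) USD per 1M tokens. Illustrative only.
PRICES = {"small-model": (0.15, 0.60), "large-model": (5.00, 15.00)}

def route(prompt: str, needs_reasoning: bool = False) -> str:
    """Pick a model tier. Real routers use classifiers or heuristics on the
    request; prompt length and an explicit flag are stand-ins here."""
    if needs_reasoning or len(prompt.split()) > 200:
        return "large-model"
    return "small-model"

def cost(model: str, in_tokens: int, out_tokens: int) -> float:
    """Dollar cost of one request at the hypothetical prices above."""
    in_price, out_price = PRICES[model]
    return (in_tokens * in_price + out_tokens * out_price) / 1_000_000

short = "Summarize this ticket in one sentence."
assert route(short) == "small-model"

# Routing 1M such requests (500 input / 100 output tokens each) per tier:
cheap = cost("small-model", 500, 100) * 1_000_000
pricey = cost("large-model", 500, 100) * 1_000_000
print(f"${cheap:,.0f} vs ${pricey:,.0f} per million requests")
```

Even at these invented prices the gap is more than an order of magnitude, which is why routing logic is worth real engineering effort.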
The MLOps-versus-DevOps comparison ($212K vs. $187K) captures a shift that has been underway for about two years but is now clearly visible in compensation data. Traditional DevOps, centered on CI/CD pipelines, container orchestration, and infrastructure-as-code, has become a mature discipline with a deep and growing talent pool. MLOps, which adds model versioning, experiment tracking, feature stores, and inference serving to the DevOps toolkit, is still new enough that experienced practitioners are scarce. The $25,000 premium reflects this gap. Engineers who can bridge both worlds, managing Kubernetes clusters and ML model registries with equal competence, are among the most valuable infrastructure hires in the market.
Finally, AI/ML's $13,000 edge over Rust marks a symbolic milestone. Throughout 2023 and much of 2024, Rust systems programming carried the highest per-language salary premium in the industry, driven by demand for memory-safe systems code in infrastructure, blockchain, and embedded contexts. AI/ML has now overtaken it. This does not diminish Rust's strong position; rather, it demonstrates the velocity of AI compensation growth. Notably, Rust is itself emerging in AI infrastructure contexts, with roles in high-performance inference engines and GPU kernel development that combine Rust expertise with ML domain knowledge. These hybrid roles, where they exist, command some of the highest salaries in the entire index.
A note on the sustainability of AI premiums
Salary premiums driven by scarcity tend to compress over time as the talent pool expands. The current AI premium is being sustained by two factors: the pace of new AI adoption across industries continues to accelerate faster than new engineers can gain production experience, and the field itself continues to evolve rapidly, meaning that "experienced" is a constantly moving target. An engineer who was expert in fine-tuning GPT-3.5 in early 2024 needed to learn entirely new approaches when GPT-4 Turbo and the function-calling paradigm shifted the landscape. This continuous obsolescence of specific knowledge keeps the effective supply of truly current AI engineers lower than the headline counts might suggest.
What This Means for Engineers
Practical career implications from the AI/ML hiring data.
The data points toward several actionable conclusions for engineers considering their career trajectory in 2026.
First, the highest returns are in applied AI, not research AI. The salary premium for production LLM integration ($210K average) is comparable to or higher than many ML research positions. This reflects a market truth that the academic pipeline has been slow to absorb: companies need engineers who can ship AI features, not just train models. The skills that command the highest compensation are decidedly practical: building reliable inference pipelines, managing model serving infrastructure, designing evaluation frameworks, and understanding the operational economics of API-based model consumption. An engineer who can reduce an LLM integration's token costs by 40% while maintaining output quality is solving a problem worth hundreds of thousands of dollars per year to their employer.
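The "evaluation frameworks" referenced above are, at their simplest, disciplined test harnesses for model output. A minimal sketch, assuming a `model` callable and substring-match scoring (production evals use fuzzier scorers and much larger case sets):

```python
def run_eval(model, cases: list[tuple[str, str]]) -> float:
    """Run each (prompt, expected) case through `model`; return the pass rate."""
    passed = sum(
        1 for prompt, expected in cases
        if expected.lower() in model(prompt).lower()
    )
    return passed / len(cases)

# A stub model standing in for a real API call.
def stub_model(prompt: str) -> str:
    return "The capital of France is Paris." if "France" in prompt else "Unsure."

cases = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Peru?", "Lima"),
]
print(run_eval(stub_model, cases))  # 0.5
```

Running a harness like this on every prompt or model change is what turns "the new model seems fine" into a number a team can gate deployments on.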
Second, MLOps is the most accessible on-ramp into AI salaries for experienced backend engineers. You do not need a PhD or even formal ML training to move into the $212K MLOps tier. If you already understand distributed systems, containerization, CI/CD, and data pipeline design, the additional knowledge required (experiment tracking, model versioning, feature stores, and inference serving) is learnable within six to twelve months of focused effort. MLflow, Spark, and the surrounding ecosystem are well-documented and accessible. The $25,000 premium over traditional DevOps is compensation for domain-specific knowledge that builds naturally on existing infrastructure expertise.
Third, be realistic about remote work in AI. If remote flexibility is a priority, understand that you are filtering out 75% of the AI/ML job market. The 25% that is remote tends to be concentrated in application-layer AI integration (building products on top of model APIs) rather than core ML engineering or infrastructure work. Engineers who want remote AI work should focus on the OpenAI/Anthropic API integration layer, where the work is inherently cloud-native and does not require access to proprietary hardware or restricted data environments.
Fourth, learn PyTorch, not TensorFlow, if you are starting from scratch. The data is unambiguous: PyTorch has more jobs, higher salaries, and momentum. If you already know TensorFlow, your skills remain marketable, but investing new learning time into TensorFlow in 2026 has a lower expected return than the same investment in PyTorch. The exception is mobile and edge ML, where TensorFlow Lite retains a significant presence.
Finally, understand that the AI salary premium is a snapshot, not a guarantee. The engineers commanding $237K for OpenAI integration work today are benefiting from a temporary supply-demand imbalance. That imbalance will eventually compress, just as the cloud computing premium compressed over the past decade. The engineers who will sustain high compensation over the long term are those who continuously deepen their expertise at the frontier rather than resting on a single skill that was scarce in 2025. The market rewards currency, not credentials.
Methodology
This analysis draws on the findjobs.dev index of 77,480 active software engineering job listings aggregated from 21 applicant tracking systems and company career pages across 214 countries. AI/ML-related jobs were identified through two complementary methods: the industry tag "ai-ml" applied by our fingerprinting system (1,757 jobs), and technology-specific tags including LLM, PyTorch, TensorFlow, OpenAI, MLflow, and Spark.
Salary figures represent base compensation only. All salary data comes from listings with explicit employer-provided compensation information. We do not estimate or impute salaries. Figures are reported in USD, with non-USD listings converted at the exchange rate at time of indexing. Equity, bonuses, and benefits are excluded. Where a listing provides a range, we use the midpoint for aggregation.
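The midpoint rule is easy to make concrete. A sketch of how a listed range would enter the aggregation; the field names are illustrative, not the actual findjobs.dev schema:

```python
def midpoint(listing: dict) -> float:
    """Collapse a salary range to its midpoint; pass point values through."""
    lo = listing["salary_min"]
    hi = listing.get("salary_max") or lo  # point-value listings have no max
    return (lo + hi) / 2

listings = [
    {"salary_min": 180_000, "salary_max": 220_000},  # range -> 200,000
    {"salary_min": 195_000, "salary_max": None},     # point value -> 195,000
]
avg = sum(midpoint(x) for x in listings) / len(listings)
print(avg)  # 197500.0
```

Using midpoints rather than range minimums or maximums keeps wide-band listings from systematically skewing the averages in either direction.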
The "AI/ML average salary" of $202,837 is calculated across the 1,259 listings that carry both the AI/ML industry tag and explicit salary data. Technology-specific averages (OpenAI, PyTorch, etc.) are calculated across all listings tagged with that technology, regardless of industry classification, which means some overlap exists between categories. Remote work percentages are calculated across all AI/ML-tagged jobs regardless of salary data availability.