Farmington is the economic center of the San Juan Basin, one of the most prolific oil and natural gas regions in the United States. The region produces significant volumes of oil, natural gas, and coalbed methane, with infrastructure including pipelines, compressor stations, production wells, and extensive subsurface data from decades of exploration. Large operators such as ExxonMobil and Chevron, along with smaller independent producers, maintain substantial operations and offices in Farmington. Custom AI development in Farmington serves energy infrastructure and exploration applications: predicting well productivity, optimizing production allocation, predicting equipment failures in compressor stations and pipelines, and interpreting seismic and wellbore data using machine learning. The work requires understanding petroleum engineering, reservoir characterization, and the massive datasets (seismic surveys, well logs, production history, pressure and temperature records) that energy companies accumulate. But the economics are compelling: a machine-learning model that improves well productivity estimates by five to ten percent can unlock millions of dollars in additional production; a predictive maintenance model that prevents a pipeline failure avoids cleanup costs and environmental liability. Custom AI development in Farmington requires domain expertise in energy engineering and the ability to handle large, complex subsurface datasets. LocalAISource connects Farmington energy operators, infrastructure companies, and exploration firms with custom AI developers experienced in oil and gas applications.
Updated May 2026
The majority of Farmington custom AI projects serve well productivity prediction and reservoir characterization. The first use case is well productivity prediction: training a model on well log data, core samples, production history, and pressure data to predict the productivity (peak production rate, total recovery) of a prospective well before drilling. This allows operators to prioritize drilling locations, design wells optimally, and allocate capital efficiently. These projects run fourteen to twenty-four weeks, cost sixty to one hundred fifty thousand dollars, and require integration with subsurface data platforms (Petrel, Geoframe, Aspen). The second use case is production allocation and decline prediction: training a model on historical production data to predict production rates at different pressure drawdowns, then optimizing the allocation of production capacity across the portfolio of wells and compressor stations. These projects are twelve to twenty weeks and cost fifty to one hundred twenty thousand dollars. The third use case is seismic interpretation and structural mapping: training a neural network on interpreted seismic surveys and well control data to interpret new surveys faster and more consistently than manual interpretation. These projects are data-intensive, run twenty to thirty-six weeks, and cost one hundred to three hundred thousand dollars, but they can accelerate subsurface characterization significantly.
Custom AI development in Farmington differs from commercial AI development by the massive scale and complexity of subsurface data and the importance of uncertainty quantification. A seismic survey of a large lease might consist of fifty thousand individual seismic traces; a well log might contain fifty different measurements (gamma ray, resistivity, density, sonic velocity, etc.). The custom AI development project must integrate these disparate data sources, handle missing or conflicting data (wells have logs at some depths but not others; seismic coverage has gaps), and produce predictions with quantified uncertainty. Uncertainty quantification is critical in exploration: an operator making a drilling decision based on a well productivity prediction needs to know not just the expected productivity but the range of likely outcomes (10th percentile, 50th percentile, 90th percentile). A capable custom AI partner will include Bayesian uncertainty quantification or ensemble methods that provide confidence intervals on predictions. This typically adds fifteen to twenty percent to timeline and cost, but it is essential for sound decision-making.
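The ensemble approach to uncertainty quantification can be illustrated with a minimal sketch. The data, features, and linear stand-in model below are entirely synthetic and hypothetical; a real project would use actual offset-well features and a more expressive model, but the bootstrap-then-take-percentiles pattern for producing P10/P50/P90 estimates is the same.

```python
"""Minimal sketch of ensemble-based uncertainty quantification for a
well-productivity prediction. All data is synthetic; in practice the
features come from well logs and the target from actual production."""
import numpy as np

rng = np.random.default_rng(42)

# Synthetic offset wells: two petrophysical features -> productivity
X = rng.normal(size=(80, 2))
y = 100 + 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(scale=5, size=80)

def fit_linear(X, y):
    """Least-squares fit with an intercept column (stand-in model)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Bootstrap ensemble: refit on resampled wells to capture model uncertainty
n_models = 200
prospect = np.array([[0.5, -0.2]])   # hypothetical prospect well features
preds = np.empty(n_models)
for i in range(n_models):
    idx = rng.integers(0, len(X), size=len(X))
    coef = fit_linear(X[idx], y[idx])
    preds[i] = predict(coef, prospect)[0]

# Report the range of likely outcomes, not just a point estimate
p10, p50, p90 = np.percentile(preds, [10, 50, 90])
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

The spread between P10 and P90 is what a drilling decision actually needs: a wide interval on an otherwise attractive prospect is itself a signal to acquire more data before committing capital.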
A secondary category of Farmington custom AI projects involves predictive maintenance for energy infrastructure: compressor stations, pipeline networks, and processing facilities. A typical project involves training a model on equipment sensor data (vibration, temperature, pressure), maintenance history, and environmental data (ambient temperature, dust/salt exposure for coastal or desert installations) to predict equipment failures. Early detection of bearing wear, corrosion, or other failures allows maintenance to be scheduled before failure, reducing downtime and emergency repairs. These projects are twelve to eighteen weeks and cost forty to ninety thousand dollars. They require integration with SCADA (Supervisory Control and Data Acquisition) systems and maintenance management systems. The payoff is concrete: preventing one critical failure (a compressor failure or pipeline rupture) can save millions in lost production, environmental cleanup, or emergency response.
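Before a learned failure model is in place, the simplest precursor check on sensor data is a deviation-from-baseline alarm. The sketch below uses synthetic vibration readings and an assumed healthy-baseline window; it is an illustration of the idea, not a production SCADA integration.

```python
"""Illustrative precursor check for compressor health: flag vibration
readings that drift beyond a known-healthy baseline. Sensor values are
synthetic; real data would come from the SCADA historian."""
import numpy as np

rng = np.random.default_rng(7)
# Simulated hourly vibration (mm/s): stable baseline, then bearing-wear drift
baseline = rng.normal(loc=2.0, scale=0.1, size=500)
drift = 2.0 + np.linspace(0.0, 1.5, 100) + rng.normal(scale=0.1, size=100)
vibration = np.concatenate([baseline, drift])

# Baseline statistics from a period known to be healthy (first 300 hours)
ref_mean = vibration[:300].mean()
ref_std = vibration[:300].std()

# Flag readings more than 4 standard deviations above the healthy baseline;
# a conservative threshold keeps false alarms rare on noisy sensors
z = (vibration - ref_mean) / ref_std
alerts = np.where(z > 4.0)[0]

print("first alert at hour:", int(alerts[0]) if len(alerts) else None)
```

A learned model replaces the fixed threshold with a classifier trained on maintenance history, but this baseline check is useful both as a stopgap and as a sanity benchmark the model must beat.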
Well productivity depends on multiple factors: petrophysical properties (porosity, permeability, fluid saturation) inferred from well logs; structural factors (depth, pressure, temperature); and completion design (perforation density, stage spacing, proppant type for unconventional wells). A custom AI model learns patterns from offset wells (wells already drilled) where actual production is known, then predicts productivity for prospect wells where production is hypothetical. Structure your training data as follows: one row per well, with columns for well log measurements (gamma ray, resistivity, density, sonic velocity, dipole sonic, etc.), core analysis results (if available), completion design parameters, and actual production metrics (peak rate, cumulative production at fixed time). Include at least fifty to one hundred offset wells in the training set, covering a range of production outcomes. A custom AI development partner will help you extract and integrate this data from your subsurface database (Petrel, Geoframe, or similar).
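The one-row-per-well table described above can be sketched as follows. The column names are illustrative placeholders, not a prescribed schema; the actual fields come from whatever your subsurface database exports.

```python
"""Sketch of the one-row-per-well training table. Column names and
values are hypothetical; real rows are extracted from Petrel/Geoframe."""
import csv
import io

wells = [
    {
        "well_id": "SJ-001",
        "gamma_ray_api": 85.2,        # well log measurements
        "resistivity_ohmm": 12.4,
        "density_gcc": 2.41,
        "sonic_us_ft": 78.0,
        "core_porosity_pct": 11.5,    # core analysis, if available (else blank)
        "perf_density_per_ft": 4,     # completion design parameters
        "stage_spacing_ft": 250,
        "peak_rate_mcfd": 1850,       # actual production: the training targets
        "cum_365d_mmcf": 310,
    },
    # ... one row per offset well; fifty to one hundred rows minimum
]

# Write the table as CSV, one row per well, ready for model training
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(wells[0].keys()))
writer.writeheader()
writer.writerows(wells)
print(buf.getvalue())
```

Keeping the table flat (one row per well, one column per feature) makes it easy to audit for missing values and to hand off between the subsurface team and the AI developer.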
Seismic interpretation is computationally intensive and data-hungry. A typical project costs one hundred fifty to three hundred thousand dollars and takes twenty to thirty-six weeks. The cost drivers are the size of the seismic survey (larger surveys = more data to process), the complexity of the geology (complex structural or stratigraphic settings require more training data), and the amount of well control data available for training. A smaller project focused on a specific play (e.g., identifying a particular stratigraphic horizon) might cost eighty to one hundred fifty thousand dollars and take twelve to twenty weeks. A larger project covering multiple plays and structural interpretations could cost three hundred to six hundred thousand dollars and take two to three years. Ask your partner: how much seismic data is required, how many well control points are needed, and can results be delivered in phases (first a quick pilot on a subset of the survey, then full-scale interpretation)?
Validation requires comparing model predictions to actual well performance on a held-out set of offset wells. Use a stratified holdout strategy: separate out ten to fifteen percent of your offset wells (preferably recent wells that represent current drilling practices), then train the model on the remaining eighty-five to ninety percent. Make productivity predictions for the held-out wells, then compare to actual production. Measure root-mean-square error (RMSE) or mean absolute percentage error (MAPE). If RMSE is ten to twenty percent of average well productivity, that is reasonable for exploration prediction. Validate across different well types (vertical, deviated, horizontal) and different formations if your portfolio covers multiple plays. A model validated on vertical wells may not be reliable for horizontal wells unless the training data includes both. Use the validation results to establish confidence in the model before making high-stakes drilling decisions.
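The holdout workflow above can be sketched directly. Everything here is synthetic, and the least-squares fit is a stand-in for whatever productivity model the project actually builds; the point is the split (most recent wells held out), the metrics (RMSE and MAPE), and the comparison of RMSE against average productivity.

```python
"""Holdout validation sketch: hold out the most recent ~15% of offset
wells, train on the rest, score RMSE and MAPE. The linear fit is a
stand-in model; the data is synthetic."""
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 3))
y = 200 + 40 * X[:, 0] + 15 * X[:, 1] + rng.normal(scale=10, size=n)
spud_order = np.arange(n)            # wells sorted oldest -> newest

# Hold out the most recent 15%: they best reflect current drilling practice
n_hold = int(0.15 * n)
train_idx, test_idx = spud_order[:-n_hold], spud_order[-n_hold:]

A_train = np.column_stack([np.ones(len(train_idx)), X[train_idx]])
coef, *_ = np.linalg.lstsq(A_train, y[train_idx], rcond=None)
pred = np.column_stack([np.ones(len(test_idx)), X[test_idx]]) @ coef

rmse = np.sqrt(np.mean((pred - y[test_idx]) ** 2))
mape = np.mean(np.abs((pred - y[test_idx]) / y[test_idx])) * 100
rel = rmse / y.mean() * 100          # RMSE as % of average productivity
print(f"RMSE={rmse:.1f} ({rel:.1f}% of mean productivity), MAPE={mape:.1f}%")
```

If `rel` lands in the ten-to-twenty-percent band described above, the model is in reasonable shape for exploration use; much worse than that, and the drilling decision should lean more heavily on traditional methods.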
Traditional decline curve analysis (using exponential, hyperbolic, or power-law decline functions) is well-established and has decades of historical validation. Custom AI models (neural networks, gradient boosting) can sometimes provide more accurate predictions by learning nonlinear patterns that traditional methods miss. However, AI models are black boxes; explaining why a specific prediction was made is harder than with explicit decline functions. In practice, a hybrid approach is often best: use traditional decline curve analysis as a baseline, then train an AI model to predict the residuals (actual production minus traditional forecast). If the AI model can predict residuals more accurately than random noise, then it adds value. Validate both approaches on historical production data, then decide whether the added complexity of the AI model is justified by the improvement in accuracy.
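The hybrid baseline-plus-residual idea can be sketched on a single synthetic well. This example uses an exponential decline fit in log space and a small least-squares seasonal correction on the residuals; a real workflow would typically use hyperbolic (Arps) decline and a gradient-boosted residual model, but the structure is the same.

```python
"""Hybrid sketch: traditional decline curve as baseline, then a small
correction model fit to the residuals. Data and seasonal effect are
synthetic; the residual model is a simple least-squares stand-in."""
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1, 61)                 # months on production
# Synthetic well: exponential decline plus a seasonal effect that a plain
# decline curve cannot capture
q = 1000 * np.exp(-0.03 * t) * (1 + 0.05 * np.sin(2 * np.pi * t / 12))
q = q + rng.normal(scale=5, size=t.size)

# Baseline: exponential decline q = qi * exp(-D t), fit in log space
slope, intercept = np.polyfit(t, np.log(q), 1)
baseline = np.exp(intercept + slope * t)

# Residual model: regress (actual - baseline) on seasonal features
resid = q - baseline
month = t % 12
features = np.column_stack([np.sin(2 * np.pi * month / 12),
                            np.cos(2 * np.pi * month / 12),
                            np.ones_like(t, dtype=float)])
coef, *_ = np.linalg.lstsq(features, resid, rcond=None)
hybrid = baseline + features @ coef

rmse_base = np.sqrt(np.mean((q - baseline) ** 2))
rmse_hybrid = np.sqrt(np.mean((q - hybrid) ** 2))
print(f"baseline RMSE={rmse_base:.1f}, hybrid RMSE={rmse_hybrid:.1f}")
```

The residual test described in the text falls out naturally: if the correction model cannot beat the baseline on held-out data, the residuals are effectively noise and the added AI complexity is not justified.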
Production models should be retrained monthly or quarterly as new production data arrives. Equipment failure prediction models should be retrained quarterly or semi-annually as new sensor and maintenance data accumulates. More frequent retraining (weekly) is not necessary and may lead to overfitting — the model learns noise in recent data rather than stable underlying patterns. Implement automated retraining: a script pulls new production and sensor data, retrains the model, validates performance against recent test data, and deploys if validation passes. Plan for two to four thousand dollars per year for model maintenance and retraining after the initial development phase. Retrain after significant operational changes (new completion design, new equipment type, new pressure regime) even if the scheduled retraining window has not elapsed.
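The automated retraining loop described above can be sketched as a simple gate: pull data, retrain, validate on the most recent slice, and deploy only if validation passes. The function names, data, and the 25 percent relative-RMSE threshold below are all hypothetical placeholders you would replace with your own data platform hooks and acceptance criteria.

```python
"""Sketch of an automated retraining gate. pull_production_data and the
validation threshold are placeholders; real code queries the production
historian and applies your own acceptance criteria."""
import numpy as np

rng = np.random.default_rng(3)

def pull_production_data():
    """Placeholder: in practice, query the historian/SCADA database."""
    X = rng.normal(size=(120, 2))
    y = 50 + 20 * X[:, 0] + rng.normal(scale=3, size=120)
    return X, y

def retrain_and_validate(X, y, max_rel_rmse=0.25):
    """Retrain on older data, validate on the most recent 20%, and
    return (model, ok_to_deploy)."""
    split = int(0.8 * len(X))
    A = np.column_stack([np.ones(split), X[:split]])
    coef, *_ = np.linalg.lstsq(A, y[:split], rcond=None)
    A_test = np.column_stack([np.ones(len(X) - split), X[split:]])
    rmse = np.sqrt(np.mean((A_test @ coef - y[split:]) ** 2))
    return coef, bool(rmse / np.mean(y) < max_rel_rmse)

X, y = pull_production_data()
model, ok = retrain_and_validate(X, y)
print("deploy new model:", ok)
```

Validating on the most recent data before deployment is the key safeguard: after an operational change (new completion design, new equipment type), the gate fails and flags the model for a manual retrain rather than silently deploying a degraded one.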
Browse verified professionals in Farmington, NM.