Pocatello punches above its weight for a city of its size because Idaho State University sits in the city, operating one of the largest energy-systems research programs in the Pacific Northwest and maintaining close partnerships with Idaho National Laboratory. ISU's College of Engineering and the Power Engineering Program feed researchers into both INL and regional utility operators. For automation teams, Pocatello is a microcosm of academic-plus-operational complexity: university research workflows that need data standardization and reproducibility audit trails, alongside utility operations that demand real-time responsiveness to grid events. That combination, disciplined academic workflows plus high-stakes operational workflows, attracts automation partners who understand both research compliance and field-service operations. LocalAISource connects Pocatello university and utility operations teams with automation specialists experienced in research-data pipelines, grid operations, and academic compliance frameworks.
Updated May 2026
ISU's energy-systems research generates massive datasets: field measurements from distribution systems, lab experiments on new storage technologies, simulation results from grid-stability models. Individual research teams often use their own data formats, scripts, and documentation standards, making it nearly impossible to aggregate findings or reproduce results years later. Research-data automation here is not about RPA bots clicking screens — it is about building workflow middleware that ingests messy research data, validates it against a standard schema, flags anomalies (out-of-range values, missing metadata, format errors), and either auto-corrects or routes to a data steward for review. For academic publication and grant audits, that middleware creates an audit trail that satisfies funding agencies and journal reviewers.
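The ingest-validate-route step described above can be sketched in a few lines. This is a minimal, illustrative model, not a real ISU pipeline: the schema fields, the voltage range, and the three-way status are assumptions chosen to show the pattern.

```python
# Hypothetical research-data validation step: check a record against a
# standard schema, auto-correct what can be corrected (e.g. string -> float),
# and route anything else to a data steward. Field names are illustrative.
STANDARD_SCHEMA = {
    "timestamp": str,      # ISO-8601 measurement time
    "site_id": str,        # field-site identifier
    "voltage_kv": float,   # measured voltage, kilovolts
}

VOLTAGE_RANGE_KV = (0.0, 500.0)  # assumed plausible range for anomaly flagging

def validate_record(record):
    """Return (status, cleaned_record, issues).

    status is 'accepted', 'corrected', or 'needs_review' (data-steward queue).
    """
    issues, cleaned, corrected = [], {}, False
    for field, ftype in STANDARD_SCHEMA.items():
        if field not in record:
            issues.append(f"missing metadata: {field}")
            continue
        value = record[field]
        if isinstance(value, ftype):
            cleaned[field] = value
        else:
            try:
                cleaned[field] = ftype(value)   # auto-correct, e.g. "12.5" -> 12.5
                corrected = True
            except (TypeError, ValueError):
                issues.append(f"format error: {field}={value!r}")
    v = cleaned.get("voltage_kv")
    if v is not None and not (VOLTAGE_RANGE_KV[0] <= v <= VOLTAGE_RANGE_KV[1]):
        issues.append(f"out-of-range value: voltage_kv={v}")
    if issues:
        return "needs_review", cleaned, issues   # route to the data steward
    return ("corrected" if corrected else "accepted"), cleaned, issues
```

Every returned tuple (including the issues list) gets written to the audit log, which is what satisfies funders and journal reviewers later.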
Pocatello Power operates distribution lines across rural terrain where weather events and wildlife damage are frequent. When a line goes down, the company needs to dispatch field crews, coordinate with contractors, log the outage in regulatory filings, and communicate status to customers. Manual coordination via phone and email is error-prone and slow. RPA can automate the intake: when a SCADA alert fires (circuit breaker trip or voltage drop), trigger an RPA flow that logs the event, queries the GIS system for affected customers, initiates a crew dispatch, and sends customer notifications. An automation partner integrates the outage-management system (OMS), the crew-dispatch system, the customer-notification platform, and the SCADA historian into a single workflow that reduces mean-time-to-repair (MTTR) and improves customer communication.
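The four-step intake flow above can be sketched as a single orchestration function. The alert payload and the four system interfaces (OMS, GIS, dispatch, notifier) are stand-ins, not any real vendor's API; the stubs exist only to make the sketch runnable.

```python
# Illustrative outage-intake orchestration. Each collaborator object is a
# placeholder for a real system integration (OMS, GIS, crew dispatch,
# customer notifications); the method names are assumptions.
def handle_scada_alert(alert, oms, gis, dispatch, notifier):
    """Run the intake steps in order and return a summary for the audit log."""
    event_id = oms.log_event(alert)                          # 1. log in the OMS
    customers = gis.customers_on_circuit(alert["circuit"])   # 2. find affected customers
    crew = dispatch.assign_nearest_crew(alert["circuit"])    # 3. dispatch a crew
    for customer in customers:                               # 4. notify customers
        notifier.send(customer,
                      f"Outage on circuit {alert['circuit']}; crew {crew} dispatched.")
    return {"event_id": event_id, "customers_notified": len(customers), "crew": crew}

# In-memory stubs so the sketch runs without real systems.
class StubOMS:
    def __init__(self): self.events = []
    def log_event(self, alert):
        self.events.append(alert)
        return len(self.events)        # event id = sequence number

class StubGIS:
    def customers_on_circuit(self, circuit): return ["cust-1", "cust-2"]

class StubDispatch:
    def assign_nearest_crew(self, circuit): return "crew-7"

class StubNotifier:
    def __init__(self): self.sent = []
    def send(self, customer, message): self.sent.append((customer, message))
```

In production each stub would be replaced by an adapter for the utility's actual OMS, GIS, dispatch, and notification systems; the orchestration function itself stays the same.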
ISU's research proposals go through a complex approval chain: principal investigator drafts the proposal, department reviews for technical merit, compliance office checks for human subjects / environmental / safety concerns, and grants office validates budget and funder requirements. Traditional serial routing means a proposal spends 4-6 weeks in review even if each step only takes 2-3 days. RPA can parallelize: once the PI submits, route the proposal to all reviewers simultaneously, aggregate feedback, flag conflicts or missing reviews, and escalate when something is stuck. For a university with 50+ ongoing research initiatives, that coordination automation is a game-changer for time-to-funding and researcher satisfaction.
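The fan-out-and-escalate pattern is small enough to sketch directly. The reviewer roles mirror the ones named above, but the three-day escalation window and the in-memory data model are illustrative assumptions, not institutional policy.

```python
# Minimal sketch of parallel proposal routing: fan out to all reviewers at
# submission, then aggregate and flag missing or stuck reviews.
from datetime import datetime, timedelta

REVIEWERS = ["department", "compliance", "grants"]
ESCALATE_AFTER = timedelta(days=3)   # assumed escalation window

def route_proposal(proposal_id, submitted_at):
    """Fan out: every reviewer gets the proposal simultaneously."""
    return {r: {"proposal": proposal_id, "sent": submitted_at, "done": None}
            for r in REVIEWERS}

def review_status(requests, now):
    """Aggregate feedback; flag reviews that are missing or overdue."""
    missing = [r for r, req in requests.items() if req["done"] is None]
    stuck = [r for r in missing if now - requests[r]["sent"] > ESCALATE_AFTER]
    return {"complete": not missing, "missing": missing, "escalate": stuck}
```

Because every reviewer's clock starts at submission, the slowest single step bounds the total review time, instead of the sum of all steps as in serial routing.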
You cannot force scientists to change how they collect or process data — that kills buy-in. Instead, build the standardization layer downstream: accept messy inputs, validate and transform into a standard schema using automated rules, and only escalate to a human if the transformation fails. For example, if a field measurement comes in as a CSV with non-standard column names, use fuzzy matching to map it to the standard schema. If it fails, flag it for the data steward to correct, not the scientist. That approach minimizes friction on the research team and concentrates the standardization burden on a dedicated data platform.
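The fuzzy column-mapping step can be sketched with Python's standard-library difflib. The standard column names and the similarity cutoff are illustrative assumptions; a production pipeline would tune both against real incoming files.

```python
# Sketch of fuzzy-matching messy CSV headers onto a standard schema.
# Unmatched columns go to the data steward, not back to the scientist.
from difflib import get_close_matches

STANDARD_COLUMNS = ["timestamp", "site_id", "voltage_kv"]   # assumed schema

def map_columns(incoming_columns, cutoff=0.6):
    """Return (mapping, unmatched) for a set of incoming CSV headers."""
    mapping, unmatched = {}, []
    for col in incoming_columns:
        # Normalize, then take the single closest standard name above cutoff.
        candidates = get_close_matches(col.lower().strip(), STANDARD_COLUMNS,
                                       n=1, cutoff=cutoff)
        if candidates:
            mapping[col] = candidates[0]
        else:
            unmatched.append(col)   # data steward resolves these by hand
    return mapping, unmatched
```

The key design point is the failure path: a column that cannot be mapped is queued for the steward with the original file attached, so the scientist never sees the standardization machinery at all.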
Direct ROI is hard to quantify, but the operational gains are real: faster outage notification (2-3 minutes vs. 15-20 minutes for manual calls), fewer missed customer notifications, and reduced MTTR by 10-20% because crews are dispatched immediately instead of waiting for a manual call. For a utility, reducing MTTR by an hour on a major outage that affects 1,000 customers can be worth $10k-50k in avoided regulatory penalties and customer switching. That math justifies a $50-100k RPA investment over 12-18 months. The secondary benefit is data: you now have a clean log of every outage event, which is valuable for grid-resilience planning.
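The payback arithmetic above can be made explicit. The dollar figures below are midpoints of the article's illustrative ranges, and the outage frequency is an assumption, not utility data.

```python
# Back-of-the-envelope payback calculation using the article's ranges.
def payback_months(investment, avoided_cost_per_major_outage, major_outages_per_year):
    """Months for avoided-cost savings to cover the RPA investment."""
    annual_benefit = avoided_cost_per_major_outage * major_outages_per_year
    return investment / annual_benefit * 12

# Assumed midpoints: $75k investment, $30k avoided per major outage,
# two major outages per year.
months = payback_months(75_000, 30_000, 2)   # 15.0 months
```

A 15-month payback sits inside the 12-18-month window cited above; the number is sensitive to outage frequency, which is why the clean outage log the automation produces is itself valuable for refining the estimate.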
SCADA systems are air-gapped from the internet for safety reasons, and rightly so. You cannot bridge that gap with direct RPA. The safe pattern is a one-way data export: the SCADA system exports data (via a secure, manually-controlled process) to an internal historian, and your RPA automation reads from that historian, not from SCADA directly. For ISU-side research workflows, you can push aggregated SCADA-derived insights (e.g., 'grid stability metrics for the Pocatello region') into the research data pipeline without exposing the operational system.
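The one-way pattern can be sketched as a read-only view over the exported data. The historian interface and the example metric are stand-ins for whatever the utility's secure export process actually produces; the point is that the automation has no write path back toward SCADA.

```python
# Sketch of the read-only historian pattern: automation queries an internal
# snapshot of exported SCADA data, never the operational system itself.
class HistorianReader:
    """Read-only view over rows exported one-way from SCADA."""
    def __init__(self, exported_rows):
        self._rows = list(exported_rows)   # private snapshot; no write methods

    def query(self, circuit):
        return [r for r in self._rows if r["circuit"] == circuit]

def grid_stability_metric(reader, circuit):
    """Aggregate exported measurements into a research-facing metric
    (here, an illustrative mean voltage for the circuit)."""
    rows = reader.query(circuit)
    if not rows:
        return None
    return sum(r["voltage_kv"] for r in rows) / len(rows)
```

Only the aggregated metric crosses into the research pipeline, so the operational system stays isolated even if the research side is compromised.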
Data from failed experiments should flow through the pipeline just like data from successful ones: the negative result is valuable for reproducibility and for other researchers. The RPA should not filter or exclude failed experiments based on hardcoded rules. Instead, tag the data with metadata (success/failure, confidence level, notes), and let the researcher decide whether to include it in aggregations. A good data pipeline makes it easy for researchers to filter results downstream, not in the automation layer.
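The tag-then-filter-downstream split looks like this in miniature. The metadata fields are illustrative assumptions; the design point is that the automation layer only attaches metadata and never drops records.

```python
# Sketch: the pipeline tags outcomes; researchers filter downstream.
def tag_result(data, success, confidence, notes=""):
    """Attach outcome metadata in the automation layer; nothing is dropped."""
    return {"data": data,
            "outcome": {"success": success,
                        "confidence": confidence,
                        "notes": notes}}

def filter_results(records, include_failures=True, min_confidence=0.0):
    """Researcher-controlled filtering, applied downstream of the pipeline."""
    return [r for r in records
            if (include_failures or r["outcome"]["success"])
            and r["outcome"]["confidence"] >= min_confidence]
```

Because the filter is a pure function over tagged records, two researchers can apply different inclusion criteria to the same archived dataset without touching the pipeline.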
The RPA itself is not subject to federal compliance — it is a procedural tool. But the underlying proposal process is subject to your institution's policies and funder requirements. So, build the RPA to enforce those policies: require all mandatory compliance reviews before submission, validate that the budget matches funder guidelines, and flag proposals that exceed deadline windows. Involve your compliance office in the design phase so they understand how the RPA routes proposals and can audit the logs later. Never try to automate away a compliance requirement — that is where RPA projects fail in academic settings.
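A policy-enforcing gate of the kind described above can be sketched as a pre-submission check. The required review names echo the ones listed earlier in this section, but the field names and the deadline convention are placeholders a compliance office would replace with its own rules.

```python
# Sketch of a pre-submission compliance gate: the RPA enforces policy,
# it never bypasses it. Field names are illustrative placeholders.
REQUIRED_REVIEWS = {"human_subjects", "environmental", "safety"}

def submission_blockers(proposal):
    """Return the list of reasons this proposal cannot be submitted yet;
    an empty list means it may proceed to the grants office."""
    blockers = []
    missing = REQUIRED_REVIEWS - set(proposal.get("completed_reviews", []))
    if missing:
        blockers.append(f"pending compliance reviews: {sorted(missing)}")
    if proposal["budget"] > proposal["funder_budget_cap"]:
        blockers.append("budget exceeds funder guidelines")
    if proposal["days_to_deadline"] < 0:
        blockers.append("past funder deadline")
    return blockers
```

Every blocker is logged with the proposal ID, which gives the compliance office the auditable trail it needs to verify that no proposal skipped a mandatory review.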