AI Safety & Transparency Engineer

Impac Exploration Services Inc β€’ Houston, TX

Company

Impac Exploration Services Inc

Location

Houston, TX

Type

Full Time

Job Description

Division: DATUM, Impac Exploration Services
Location: Remote; Oklahoma City (OK); Houston (TX)
Type: Full-Time

Build AI That Earns Trust by Deserving It

We're not building black boxes. At DATUM, our AI makes decisions about critical infrastructure where "trust me, it works" isn't good enough. When our models predict something, operators need to understand why. When they make recommendations, engineers need to see the reasoning.

We need someone who believes AI transparency isn't a nice-to-have; it's the whole game. Someone who gets excited about making complex models explainable without dumbing them down. Who sees safety not as a set of constraints, but as design principles that make AI actually useful in the real world.

What You'll Build

• Explainability frameworks that show why models make specific predictions
• Safety systems that catch edge cases before they reach production
• Transparency tools that build operator confidence in AI decisions
• Testing protocols for AI in high-stakes industrial environments
• Interpretability methods for complex multimodal models
• Trust metrics that actually measure what matters

Your North Star

• If an operator can't understand it, we haven't finished building it
• The best AI safety happens at design time, not as a band-aid
• Transparency means a glass box, not a black box with documentation
• Industrial AI has different stakes than consumer AI; act like it
• Trust is earned through understanding, not compliance checkboxes
• The most ethical AI is AI that actually gets used safely

The Technical Challenge

• Make deep learning interpretable without sacrificing performance
• Build explainability for decisions that combine physics, sensors, and ML
• Create safety frameworks for environments where "undo" doesn't exist
• Design transparency for users who are experts in drilling, not data science
• Develop testing protocols for conditions you can't replicate in a lab
• Balance "why did it do that?" with "what should I do now?"

You're Our Person If

• You've made complex AI systems interpretable for non-technical users
• You believe safety and performance aren't trade-offs
• You can translate between ML researchers and field operators
• You see AI ethics as engineering challenges, not philosophy debates
• You've built explainability that actually explains
• You understand that industrial safety is different from consumer safety

Especially If

• You've worked on high-stakes AI (healthcare, autonomous systems, finance)
• You've built interpretability for multimodal or time-series models
• You understand both ML theory and human factors
• You've designed AI systems that passed rigorous safety audits
• You can make uncertainty quantification intuitive
• You've turned "responsible AI" from concept to code

Why This Role Matters

Your work enables:
• Operators to trust AI recommendations with million-dollar consequences
• Regulatory acceptance of AI in critical infrastructure
• Our research team to push boundaries while staying grounded
• A new paradigm for industrial AI that's open, not opaque

This isn't about checking compliance boxes. It's about building AI that deserves to be trusted with decisions that matter.

The Reality

Industrial AI safety is uncharted territory. You'll:
• Define standards that don't exist yet
• Build explainability for users who've never trusted algorithms
• Create safety frameworks for physics-based problems
• Navigate between "move fast" and "don't break things that matter"
• Translate academic interpretability research to industrial reality

Growth Path

• Start: Build transparency into our existing models
• Six months: Define how industrial AI safety should work
• One year: Publish frameworks others adopt

When companies realize they need industrial AI ethics expertise, you'll have written the playbook they're trying to follow.

The Challenge

You'll make AI that's powerful enough to transform industries and transparent enough for a roughneck to trust it at 3 AM. You'll prove that explainable doesn't mean weak, and safe doesn't mean slow.

Ready to Open the Black Box?

Show us explainability work that actually helped real users. Tell us about safety systems you've built for complex environments. Share your vision for industrial AI that's both powerful and trustworthy. We're looking for someone who sees "it's too complex to explain" as a challenge, not an excuse.

Date Posted

07/22/2025
