Key Responsibilities

Platform Design & Development
Architect, develop, and deploy core modules (Data, Access, and Agents) for end-to-end data ingestion, contextualization, and visualization.
Design and code sensor collection agents across heterogeneous systems (Windows, Linux, macOS, mobile, IoT).
Implement real-time ingestion pipelines using technologies like Apache Kafka, Apache NiFi, Redis Streams, or AWS Kinesis.
Persist and query multi-modal data across time-series (MongoDB, InfluxDB, TimescaleDB), graph (Neo4j), and vector databases (Qdrant, FAISS, Pinecone, or Weaviate).
API & Data Access Layer
Build secure, scalable RESTful and GraphQL APIs for exposing platform data models, sensor configuration, and reporting.
Implement a unified Database Access Layer (DBAL) to abstract query logic across multiple databases.
Experiment with or extend the Model Context Protocol (MCP) or a similar standardized data interchange for multi-DB, multi-agent interoperability.
System Integration & Data Streaming
Develop low-latency data pipelines for transporting and transforming event streams (syslog, telemetry, keystrokes, IoT feeds, cloud service logs).
Collaborate with frontend engineers to connect the Access module (visual mapping UI) with back-end pipelines.
Optimization & Scalability
Optimize database query performance using down-sampling, partitioning, and caching techniques.
Design solutions for horizontal scaling and containerized deployment (Docker, Kubernetes, OpenShift).
Apply a “MacGyver-mindset” for rapid prototyping and iterative refinement under real-world constraints.
Collaboration & Mentoring
Work directly with compliance officers, security analysts, and business process owners to refine data models for regulatory and operational needs.
Conduct code reviews, mentor junior developers, and promote best practices across the team.
Required Skills & Experience

Programming: Strong proficiency in Node.js and Python (C++ a plus).
Streaming: Hands-on experience with Kafka, NiFi, Redis Streams, or AWS Kinesis.
Databases:
Time-series: MongoDB, InfluxDB, TimescaleDB, or AWS Timestream
Graph: Neo4j (Cypher, APOC, graph schema design)
Vector: Qdrant, FAISS, Pinecone, or Weaviate
AI / Agents: Experience with, or strong interest in, agentic AI frameworks, multi-agent orchestration, and context-aware data processing.
Data Interchange: Familiarity with MCP-like protocols or interest in defining standardized APIs for cross-database access.
Cloud / Infra: AWS, Azure, or GCP, with containerization (Docker, Kubernetes).
Software Engineering: Strong grasp of algorithms, distributed systems, microservice design, and API security.
Problem Solving: Strong debugging skills, a creative mindset, and the ability to balance speed with scalability.
Preferred Skills
Machine learning / NLP integration into multi-modal pipelines.
CI/CD automation and DevOps practices.
Knowledge of enterprise integration patterns, event-driven systems, and zero-trust security models.
Experience with compliance frameworks (NERC CIP, FedRAMP, GDPR, SOX).
Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent hands-on experience).
5+ years of professional software development experience with data-intensive or AI-driven systems.
Proven experience designing, deploying, and scaling modular platforms in production.
We do not work with third-party recruiters or staffing agencies.
Senior • San José, San José Province, CR