Data Architect

Starts ASAP

The Data Architect is responsible for providing leadership in architecture and technical design, enabling designs that scale using big-data and relational database technologies; driving pragmatic solutions with the team to ensure the highest-quality software and transparent, predictable releases; and ensuring adherence to agile principles to deliver customer value and reduce time to market.

Location(s): Bowie, MD

Type: Contractor, duration 6 months

Responsibilities:

•             Work with staff members to develop pragmatic, detailed designs that support the continuing evolution of the company’s data architecture

•             Support technical architecture and design of systems in a way that ensures compliance with certification, health care regulatory standards, and a technology-based information security program;

•             Influence and ensure that teams follow SDLC best practices regarding coding standards, code reviews, and testing (including unit, integration, and system tests);

•             Assess technological options and design offerings supporting scalable, high-performance, and highly available environments;

•             Participate with Company leadership in the strategic development of technology initiatives to identify product and system enhancements which will improve customer and stakeholder value;

•             Partner with engineers and architects to ensure developed solutions adhere to established patterns and our architectural target state;

•             Provide technical thought leadership towards solving problems for the team;

•             Drive the adoption of key engineering best practices to improve the quality and reliability of the team’s deliverables;

•             Coordinate and communicate with senior and executive management to ensure goals are met within budget;

•             Collaborate with other technologists on creating cross-domain solutions.

Qualifications:

•             Bachelor’s degree (or higher) in Computer Science, MIS, IT, or a related field;

•             10 years of experience developing large-scale, distributed, data-centric solutions, including at least 5 years with Java or C#;

•             10 years of experience in leading the development and delivery of Big Data, Data Warehousing, and Data Integration solutions;

•             3 years of software development experience in Hadoop environment (Hive, Pig, HBase);

•             3 years of experience using open source message broker software such as RabbitMQ and/or Kafka;

•             Knowledge of, or working experience with, data processing algorithms and design patterns;

•             Experience working with big data technologies and high-volume transactional systems;

•             Strong understanding of ETL process design and automation, with recent relevant work experience;

•             Strong understanding of database design and development;

•             Strong understanding of architecture patterns and operational characteristics of highly available and scalable applications;

•             Working knowledge of self-service experiences and open-source web application technology stacks is a plus;

•             Experience with persisting data in one or more relational and NoSQL database technologies such as MS SQL Server, MongoDB, Cassandra, CouchDB, or Postgres;

•             Experience with API gateways and authentication technologies, such as OAuth2 and SAML;

•             Experience building service-oriented architectures and proficiency with distributed systems built on the cloud (AWS, Azure, Google Cloud);

•             Experience building large-scale web services and microservices-based applications in a cloud environment is a plus;

•             Passion for security and a strong understanding of threats, vulnerabilities and compliance standards;

•             Experience participating in and leading code reviews, refactoring, and gathering code quality metrics and measurements;

•             Fluency with modern DevOps automation toolchains, the AWS ecosystem, and modern containerization, orchestration, and virtualization technologies; and

•             Experience using source control tools such as Microsoft Team Foundation Server and/or GitHub is a plus, but not required.

Specific goals over the first 3 months will be as follows:

Review the current-state design of data stores, infrastructure, data models, data mappings, and consumption interfaces, producing the following deliverables:

•             High-level and detailed designs of end-to-end data flows and data processes for data stores

•             Topology diagrams of data stores, including servers and network components

•             High-level and detailed data models of all stores, including relational and NoSQL data stores

•             Data mappings and lineage

•             Consumption layer: interface specifications for batch and real-time access

To apply for this job, email your details to jeremy.schulman@spartantech.net