Job Title | Location | Description | Posted |
---|---|---|---|
Lead Data Engineer – (Remote – Latin America)
Bertoni Solutions |
Remote – Costa Rica / Colombia / Argentina
|
Company Description
We are a multinational team of individuals who believe that, with the right knowledge and approach, technology is the answer to the challenges businesses face today. Since 2016 we have brought this knowledge and approach to our clients, helping them translate technology into their success. With Swiss roots and our own development team in Lima and across the region, we offer the best of both cultures: the talent and passion of Latin American professionals combined with the organizational skills and Swiss mindset.

Job Description
We are seeking a highly skilled Lead Data Engineer with strong expertise in PySpark, SQL, Python, Azure Data Factory, Synapse, Databricks, and Fabric, as well as a solid understanding of end-to-end ETL and data warehousing principles. The ideal candidate will have a proven track record of designing, building, and maintaining scalable data pipelines in a collaborative, fast-paced environment.

Key Responsibilities:
- Design and develop scalable data pipelines using PySpark to support analytics and reporting needs.
- Write efficient SQL and Python code to transform, cleanse, and optimize large datasets.
- Collaborate with machine learning engineers, product managers, and developers to understand data requirements and deliver solutions.
- Implement and maintain robust ETL processes to integrate structured and semi-structured data from various sources.
- Ensure data quality, integrity, and reliability across pipelines and systems.
- Participate in code reviews, troubleshooting, and performance tuning.
- Work independently and proactively to identify and resolve data-related issues.
- Contribute to Azure-based data solutions, including ADF, Synapse, ADLS, and other services.
- Support cloud migration initiatives and DevOps practices.
- Provide guidance on best practices and mentor junior team members when needed.

Qualifications
- 8+ years of overall experience working with cross-functional teams (machine learning engineers, developers, product managers, analytics teams).
- 3+ years of hands-on experience developing and managing data pipelines using PySpark.
- 3 to 5 years of experience with Azure-native services, including Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), Databricks, Azure Synapse Analytics / Azure SQL DB / Fabric.
- Strong programming skills in Python and SQL.
- Solid experience with ETL processes and end-to-end data modeling/data warehousing solutions.
- Self-driven, resourceful, and comfortable working in dynamic, fast-paced environments.
- Advanced written and spoken English is a must for this position (B2, C1, or C2 only).
- Strong communication skills are a must.

Nice to have:
- Databricks certification.
- Knowledge of DevOps, CI/CD pipelines, and cloud migration best practices.
- Familiarity with Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB.
- Basic understanding of SAP HANA.
- Intermediate-level experience with Power BI.

Additional Information
Please note that we will not be moving forward with any applicants who do not meet the following mandatory requirements:
- 3+ years of experience with PySpark/Python, ETL and data warehousing processes, Azure Data Factory, Synapse, Databricks, Azure Data Lake Storage, Fabric, Azure SQL DB, etc.
- Proven leadership experience in a current or previous project/work experience.
- Advanced written and spoken English fluency is a MUST HAVE (from B2 level to C1/C2).
- MUST BE located in Central or South America, as this is a nearshore position (please note that we are not able to consider candidates requiring relocation or those located offshore).

More Details:
- Contract type: Independent contractor (this contract does not include PTO, tax deductions, or insurance; it only covers the monthly payment based on hours worked).
- Location: The client is based in the United States; however, the position is 100% remote for nearshore candidates located in Central or South America.
- Contract/project duration: Initially 6 months, with the possibility of extension based on performance.
- Time zone and working hours: Full-time, Monday to Friday (8 hours per day, 40 hours per week), from 8:00 AM to 5:00 PM PST (U.S. time zone).
- Equipment: Contractors are required to use their own laptop/PC.
- Start date expectation: As soon as possible.
- Payment methods: International bank transfer, PayPal, Wise, Payoneer, etc.

Bertoni Process Steps: Requirements verification, video interview.
Partner/Client Process Steps: CV review; 1 technical video interview with our partner; 1 or 2 video interviews with the end client.

Why Join Us?
- Be part of an innovative team shaping the future of technology.
- Work in a collaborative and inclusive environment.
- Opportunities for professional development and career growth.
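The responsibilities above center on PySpark transformations over Azure Data Lake Storage. Purely as an illustrative aside, the following minimal sketch shows what such a cleansing pipeline can look like; the storage account, paths, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal, hypothetical sketch of a PySpark cleansing pipeline on ADLS Gen2.
# All paths, the storage account, and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_cleansing").getOrCreate()

# Read semi-structured source data from a placeholder raw zone.
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Cleanse and transform: deduplicate, normalize types, drop invalid rows.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
)

# Write the curated dataset back to the lake, partitioned for downstream analytics.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```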
|
|
Lead, Software Engineer, Data & AI – Capital One Software (Remote)
Capital One |
Richmond, VA
|
Lead Software Engineer, Data & AI – Capital One Software (Remote)

Capital One Software is seeking an experienced Lead Software Engineer (Data & AI) to join our Rapid Prototyping Team, which is responsible for building and prototyping new features of Capital One Software's next-generation data and cloud security platform to address emerging customer use cases and showcase key product capabilities. Our software solutions leverage state-of-the-art Data & AI technologies and help enterprises discover, share, and protect their most critical data assets across hybrid and multi-cloud environments. This team plays a crucial role in designing and prototyping new product capabilities and evaluating cutting-edge technologies, and is on the front line with customers to iterate on new features. We are seeking top-tier talent to join our pioneering team and propel us towards our destination. You will be joining a team of innovative product, tech, and design leaders that tirelessly seek to question the status quo. As a Capital One Lead Software Engineer, you'll have the opportunity to be at the forefront of building this business and bringing these tools to market.

What You'll Do
- Lead a portfolio of diverse technology projects and a team of developers with deep experience in large-scale distributed systems, data security and governance, and cloud-native architecture.
- Make impactful, hands-on contributions throughout the project lifecycle, from scoping and design to coding and operations.
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies (Security, Data Management Platform, Agentic AI, Cloud Data Warehouse, etc.), participating in internal and external technology communities, and mentoring other members of the engineering community.
- Collaborate with product managers, translate product requirements into engineering tasks, and make trade-off calls to balance delivery timelines and engineering best practices.
- Own and drive cross-functional partnerships with teams spanning Engineering, Platform, Design, Data Science, Security, and GTM, and partner closely with the leadership/stakeholders of those teams.
- Drive innovations in building AI-enabled agents and fine-tuning LLM models for data classification and sensitive data detection.

Basic Qualifications
- Bachelor's Degree
- At least 4 years of experience in software engineering (internship experience does not apply)
- At least 2 years of experience with cloud computing (AWS, Microsoft Azure, or Google Cloud)
- At least 3 years of experience in a programming language: Java, Python, C++, Rust, or Go

Preferred Qualifications
- Master's Degree
- Prior experience building a Data Security Posture Management (DSPM) system is a huge plus
- Proficient in SQL; hands-on experience with RDBMS and NoSQL databases
- Hands-on experience with container orchestration services, including Docker and Kubernetes
- Experience with cloud-based lakehouses such as Databricks, Snowflake, AWS Athena, etc.
- Experience with open-source data governance/catalog platforms such as DataHub, OpenMetadata, or Atlas is a plus
- Hands-on experience building production-scale agentic AI systems; familiarity with tools such as LangChain, LangGraph, and MCP
- Hands-on experience fine-tuning LLMs and deploying them to production to serve customers
- 2+ years of experience in Agile practices

At this time, Capital One will not sponsor a new applicant for employment authorization for this position. The minimum and maximum full-time annual salaries for this role are listed below by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed-upon number of hours to be regularly worked.

- Remote (regardless of location): $175,800 - $200,700 for Lead Machine Learning Engineer
- Richmond, VA: $175,800 - $200,700 for Lead Machine Learning Engineer

Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter. This role is also eligible to earn performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI). Incentives could be discretionary or non-discretionary depending on the plan. Capital One offers a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being. Learn more at the Capital One Careers website. Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level. This role is expected to accept applications for a minimum of 5 business days. No agencies please.

Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse, nor guarantee, and is not liable for, third-party products, services, educational tools, or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe, and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
|
|
Lead Data Engineer - Athlete (REMOTE)
DICK'S Sporting Goods |
|
At DICK'S Sporting Goods, we believe in how positively sports can change lives. On our team, everyone plays a critical role in creating confidence and excitement by personally equipping all athletes to achieve their dreams. We are committed to creating an inclusive and diverse workforce reflecting the communities we serve. If you are ready to make a difference as part of the world's greatest sports team, apply to join our team today!

Overview
This team supports a wide array of data sources, inbound and outbound, and collaborates with vendors to build and maintain ingress and egress data flows. This data supports marketing functions that do not necessarily relate to one another, so the ability to go deep and broad quickly is crucial to this role. The Lead Data Engineer will act as an SME for a data domain to support the enterprise's business needs, designing patterns and coordinating solutions of moderate complexity within the assigned product team.

- Data Movement: Design the distribution of complex data infrastructure resources and create advanced physical modeling and design services to tune database applications for optimum performance.
- Data Architecture: Drive the modeling of highly complex data structures that allow for easy consumption and analysis. Select the appropriate technology for the implementation of solutions, leveraging a clear understanding of industry best practices and trends.
- Data Set Exploration and Documentation: Drive the analysis, documentation, and articulation of highly complex datasets while establishing the quality and lineage of the data.
- Functional/Technical Requirements: Support the collection of functional requirements, using document analysis and workflow analysis to express the requirements in terms of target user roles and goals.
- Program/Portfolio Management Support: Contribute to the management of a portfolio of programs while reporting to, and in partnership with, senior teammates.
- Ongoing Learning and Development: Act as subject matter expert in an area of technology, policy, regulation, or operational management for the team. Maintain external accreditations and an in-depth understanding of current and emerging external regulation and industry best practices through continuing professional development, attending conferences, and reading specialist media.
- Analysis of Current vs. Future State: Document current vs. future state processes and describe the changes required to migrate to the future-state capability, accurately recording the change required.
- Infrastructure and Network Development and Maintenance: Design and select business-critical storage, data center, and client/server environments to design solutions in line with industry best practice, and provide a third-line point of escalation for appropriate global infrastructure solutions.
- Information Security and Compliance: Create and implement best practices for safely and securely moving data. Configure infrastructure utilizing department standards for access controls, firewall rules, and storage.
- Technical Developments Recommendation: Discuss and recommend more complex or innovative solutions to better meet users' and/or business performance and quality needs.
- Operational Compliance: Maintain and renew a deep knowledge and understanding of the organization's policies and procedures and of relevant regulatory codes and codes of conduct, and ensure own work adheres to required standards.
- Enterprise Infrastructure Modernization: Drive advances in technologies and architectures to increase the value delivered by technology and digital capabilities, either through improvements to the efficiency of the technology environment or through those that reduce the total cost of technology operations. Recommend and participate in activities related to the design, development, and maintenance of the digital capabilities within the enterprise architecture.
- Horizon Scanning: Explore and develop a detailed understanding of external developments or emerging issues and evaluate their potential impact on, or usefulness to, the organization.
- Feasibility Studies: Conduct feasibility studies from a technological and organizational perspective and document findings to complete cost-benefit analysis on implementing changes to business processes, products, or business unit structure.

Qualifications
- Education: Master's degree or equivalent level preferred.
- General Experience: Substantial general work experience together with comprehensive job-related experience in own area of expertise to a fully competent level (over 6 years to 10 years).

At DICK'S, we thrive on innovation and authenticity. That said, to protect the integrity and security of our hiring process, we ask that candidates do not use AI tools (like ChatGPT or others) during interviews or assessments. To ensure a smooth and secure experience, please note the following: cameras must be on during all virtual interviews; AI tools are not permitted to be used by the candidate during any part of the interview process; offers are contingent upon a satisfactory background check, which may include ID verification. If you have any questions or need accommodations, we're here to help. Thanks for helping us keep the process fair and secure for everyone!

Targeted Pay Range: $95,200.00 - $158,800.00. This is part of a competitive total rewards package that could include other components such as incentive, equity, and benefits. Individual pay is determined by a number of factors, including experience, location, internal pay equity, and other relevant business considerations. We review all teammate pay regularly to ensure competitive and equitable pay. DICK'S Sporting Goods complies with all state paid leave requirements. We also offer a generous suite of benefits. To learn more, visit www.benefityourliferesources.com.
|
|
Lead Cloud Data Engineer - Snowflake & AWS | Remote
Avalara APAC |
|
What You'll Do
The Data Science Engineering team is looking for a Lead Cloud Data Engineer to build the data infrastructure for Avalara's core data assets, empowering us with accurate data to drive data-backed decisions. As a Lead Cloud Data Engineer, you will help develop our data and reporting infrastructure using Snowflake, SSRS, Python, AWS services, Airflow, DBT, data modeling, and automation. You will influence the implementation of technologies and solutions to solve real challenges. You have deep SQL experience, an understanding of modern data stacks and technology, broad experience with data and all things data-related, and experience guiding a team through technical and design challenges. You will report to the Sr. Manager, Cloud Software Engineering, and be part of the larger Data Engineering team.

What Your Responsibilities Will Be
- Collaborate with teams to understand data requirements and translate them into technical solutions.
- Work with data analysts and data scientists to provide them with clean, structured datasets for analysis and modeling.
- Be proficient with databases and know visualization best practices.
- Apply data modeling and reporting (ad hoc report generation) techniques.
- Prepare low-level designs for projects to proceed into implementation, and support the final go-live.
- Maintain comprehensive documentation of the reporting infrastructure, architecture, configurations, and processes; create regular reports on pipeline performance, data quality, and incidents detected.

What You'll Need To Be Successful
- Bachelor's/master's degree in computer science or equivalent.
- 8+ years' experience in the data engineering field with deep SQL knowledge; proficiency in Snowflake, Python, AWS services, advanced SQL, and SQL Server Reporting Services (SSRS) is a must.
- 4+ years working with Snowflake and Python.
- 1+ year working with automation, Docker, Terraform, containers, CI/CD, and Kubernetes.
- Hands-on experience working with SQL Server Reporting Services (SSRS).
- Experience with common container and orchestration technologies such as Docker, Terraform, CI/CD, and Kubernetes.
- Familiarity with cloud platforms such as AWS and GCP, and experience with cloud-based data solutions.
- Experience communicating updates and resolutions to customers and other partners (verbal/written), delivering technical insights and interpreting data reports for clients.
- Ability to understand and serve your clients' requirements and to create technical documentation.
- Important traits required: technically sound, leadership, experience communicating updates and resolutions to customers and other partners, problem solving, accountability, collaboration, and being data-driven.

Good To Have
- Snowflake certification is a plus; relevant certifications in data warehousing or cloud platforms.
- Hands-on experience with Grafana and Prometheus.
- Experience architecting complex data marts leveraging DBT and Airflow.

Technologies You Are Likely To Be Working With
Snowflake, Python, cloud computing (AWS), RDBMS, automation, SQL Server Reporting Services (SSRS), and Mongo. Good to have: Airflow, orchestration technologies such as Docker, containers, CI/CD and Kubernetes, GCP, and reporting/presentation-layer tools.

How We'll Take Care Of You
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship. Learn more about our benefits by region here: Avalara North America.

What You Need To Know About Avalara
We're Avalara. We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing nearly 40 billion customer API calls and over 5 million tax returns a year, and this year we became a billion-dollar business. Our growth is real, and we're not slowing down until we've achieved our mission: to be part of every transaction in the world. We're bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed that empowers our people to win. Ownership and achievement go hand in hand here. We instill passion in our people through the trust we place in them. We've been different from day one. Join us, and your career will be too.

We're An Equal Opportunity Employer
Supporting diversity and inclusion is a cornerstone of our company; we don't want people to fit into our culture but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
|
|
Lead Data Engineer – Informatica IICS (8+ Years, Remote)
Tech-assist |
Remote
|
Job Title: Lead Data Engineer – Informatica IICS
Location: Remote (USA) – occasional travel to client site if required
Job Type: Contract / Full-time
Experience Required: 8+ years
Work Shift: General (US hours)

Job Overview: We are looking for a seasoned, results-driven Lead Data Engineer with deep expertise in Informatica Intelligent Cloud Services (IICS) and data integration technologies. The ideal candidate will have strong hands-on experience with the IICS CAI/CDI modules, system administration, and cloud environments such as Azure or AWS.

Job Description
- This senior role provides technical leadership and mentorship to junior team members.
- The candidate should have relevant experience working on at least 2 to 4 end-to-end projects involving IICS.
- This position ensures the performance of all duties in accordance with the company's policies and procedures and all U.S. state and federal laws and regulations wherein the company operates.

Job Specification / Skills and Competencies
- Minimum 5+ years' experience with Informatica Data Management Cloud (IDMC), particularly with Cloud Data Integration (CDI), Cloud Application Integration (CAI), Cloud Mass Ingestion (CMI), Cloud Integration Hub (CIH), Data Quality, and API Management.
- 2+ years' hands-on experience with CAI (Processes, Service Connectors, Process Objects), developing business process automation.
- Working knowledge of handling various source/target systems, including APIs.
- Create and test complex mapping tasks and task flows, debug issues, and implement performance optimization techniques.
- Collaborate with cross-functional teams, including business analysts, architects, and data engineers, to deliver integrated data solutions.
- Perform administrator activities such as user account management, setting up permissions, creating connections, metering, license upgrades, and product upgrades.
- Strong understanding of data integration, ETL processes, data warehousing, and cloud technologies.
- Establish and enforce data quality standards and data validation processes.
- Conduct testing and quality assurance of data integration solutions to identify and resolve issues.
- Practical experience with both on-premises and cloud databases (SQL, NoSQL, etc.) and streaming platforms like Kafka is desirable.
- Fundamental understanding of cloud ecosystems like AWS, Azure, or GCP.

Job Types: Full-time, Contract
Pay: $65.00 - $80.00 per hour
Application Question(s): How many years of experience do you have as an Informatica IICS Data Engineer? Do you have experience with IDMC, CDI, CAI, CMI, and CIH?
Education: Bachelor's (preferred)
Experience: Informatica: 5 years (required); Data Engineer: 5 years (required)
Work Location: Remote
|
|
Lead Data Engineer - 100% Remote
Zeektek |
|
Job Title: Data Engineer – SQL / ETL (Teradata or Snowflake)
Location: Remote (U.S.-based)
Type: 12-Month Contract
Industry: Healthcare / Data Engineering
Team: DnA DataExchange – a 17-person collaborative team with a service-first mindset

Job Description: We are seeking a self-motivated, analytical Data Engineer with a strong background in SQL development and ETL pipeline design to join our DataExchange team within the DnA Business Unit. This is a 12-month contract with the potential to work on high-impact healthcare data initiatives. The ideal candidate has 5+ years of hands-on experience in Teradata or Snowflake environments, excels in data analysis and quality validation, and can independently own and deliver data engineering solutions with minimal oversight. You'll work closely with data analysts, developers, and external partners to design and deliver complex data extracts, resolve data quality issues, and contribute to the long-term success of our cloud data infrastructure.

Responsibilities:
- Translate business and data analyst requirements into robust ETL workflows
- Build and optimize SQL-based data pipelines using Teradata or Snowflake
- Analyze data sets and investigate data quality issues or anomalies
- Communicate findings and coordinate resolutions with both internal teams and external vendors
- Write and troubleshoot complex SQL queries to support healthcare data extracts
- Review and interpret complex data mappings to support developers and improve data accuracy
- Participate in weekly progress check-ins with your manager and team

Must-Have Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5+ years of professional experience as a Data Engineer or SQL Developer
- Expertise in Teradata or Snowflake and complex SQL query development
- Strong understanding of ETL concepts, with hands-on experience using Talend or Informatica
- Excellent problem-solving skills and data analysis experience
- Ability to work independently and own deliverables end-to-end
- Clear, proactive communication skills

Nice-to-Haves:
- Previous experience with healthcare datasets
- Background in cloud-based data tools and modern data warehouse environments
- Experience with Talend, Informatica, or other ETL platforms

What You'll Gain:
- The opportunity to work on meaningful projects that support member and provider health outcomes
- Exposure to cutting-edge technologies like Snowflake, cloud tools, and scalable ETL architectures
- A collaborative team environment with a strong mentoring culture
- Flexibility and autonomy in a remote-friendly role

Disqualifier: No practical experience with SQL
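The role above emphasizes writing and troubleshooting SQL to investigate data quality issues in Teradata or Snowflake. As an illustrative sketch only, assuming the Snowflake option and the snowflake-connector-python package, a simple quality check might be wired up like this; the database, table, columns, and credentials are hypothetical placeholders, not part of the posting.

```python
# Hypothetical sketch of a SQL-based data quality check using the Snowflake
# Python connector. Credentials, objects, and column names are placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="HEALTHCARE_DB",
    schema="CLAIMS",
)

# Count rows with missing member IDs or future-dated services in an extract table.
quality_sql = """
    SELECT
        COUNT_IF(member_id IS NULL)             AS missing_member_ids,
        COUNT_IF(service_date > CURRENT_DATE()) AS future_service_dates,
        COUNT(*)                                AS total_rows
    FROM claims_extract
"""

cur = conn.cursor()
try:
    cur.execute(quality_sql)
    missing, future, total = cur.fetchone()
    print(f"{missing} missing member IDs, {future} future-dated rows out of {total}")
finally:
    cur.close()
    conn.close()
```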
|
|
Lead Data Engineer (Remote)
Circana UK |
Remote United Kingdom
|
At Circana, we are fueled by our passion for continuous learning and growth, we seek and share feedback freely, and we celebrate victories both big and small in an environment that is flexible and accommodating to our work and personal lives. We have a global commitment to diversity, equity, and inclusion, as we believe in the undeniable strength that diversity brings to our business, employees, clients, and communities. With us, you can always bring your full self to work. Join our inclusive, committed team to be a challenger, own outcomes, and stay curious together. Circana is proud to be Certified by Great Place To Work. This prestigious award is based entirely on what current employees say about their experience working at Circana. Learn more at www.circana.com.

What will you be doing?
We are seeking a skilled and motivated Data Engineer to join a growing global team based in the UK. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply!

Job Responsibilities
Data Engineering & Data Pipeline Development
- Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow.
- Implement real-time and batch data processing using Spark.
- Enforce best practices for data quality, governance, and security throughout the data lifecycle.
- Ensure data availability, reliability, and performance through monitoring and automation.

Cloud Data Engineering
- Manage cloud infrastructure and cost optimization for data processing workloads.
- Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments.

Big Data & Analytics
- Build and optimize large-scale data processing pipelines using Apache Spark and PySpark.
- Implement data partitioning, caching, and performance tuning for Spark-based workloads.
- Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives.

Workflow Orchestration (Airflow)
- Design and maintain DAGs (Directed Acyclic Graphs) in Airflow to automate complex data workflows.
- Monitor, troubleshoot, and optimize job execution and dependencies.

Team Leadership & Collaboration
- Lead a team of data engineers, providing technical guidance and mentorship.
- Foster a collaborative environment and promote best practices for coding standards, version control, and documentation.

Requirements
- This is a client-facing role; strong communication and collaboration skills are vital.
- Experience in data engineering with expertise in Azure, PySpark, Spark, and Airflow.
- Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code.
- Deep understanding of Spark internals (RDDs, DataFrames, DAG execution, partitioning, etc.).
- Experience with Airflow DAGs, scheduling, and dependency management.
- Knowledge of Git, Docker, Kubernetes, and Terraform, applying DevOps best practices for CI/CD workflows.
- Excellent problem-solving skills and the ability to optimize large-scale data processing.
- Experience leading teams and working in Agile/Scrum environments.
- A proven track record of working effectively with global remote teams.

Desirable:
- Experience with data modelling and data warehousing concepts.
- Familiarity with data visualization tools and techniques.
- Knowledge of machine learning algorithms and frameworks.

Circana Behaviours
As well as the technical skills, experience, and attributes that are required for the role, our shared behaviours sit at the core of our organization. Therefore, we always look for people who can continuously champion these behaviours throughout the business within their day-to-day role:
- Stay Curious: Being hungry to learn and grow, always asking the big questions.
- Seek Clarity: Embracing complexity to create clarity and inspire action.
- Own the Outcome: Being accountable for decisions and taking ownership of our choices.
- Centre on the Client: Relentlessly adding value for our customers.
- Be a Challenger: Never complacent, always striving for continuous improvement.
- Champion Inclusivity: Fostering trust in relationships, engaging with empathy, respect, and integrity.
- Commit to each other: Contributing to making Circana a great place to work for everyone.

Location
This position can be located in the following area(s): Remote or Bracknell, UK. #LI-KM1
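The Airflow responsibilities described above (designing and maintaining DAGs, monitoring job dependencies) can be pictured with a minimal sketch like the one below; the DAG id, schedule, and task callables are illustrative placeholders, not Circana code.

```python
# Minimal, hypothetical Airflow DAG illustrating a daily extract -> transform -> load
# workflow. Task names, schedule, and callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw data from a source system.
    print("extracting raw data")

def transform():
    # Placeholder: run the PySpark/SQL transformations.
    print("transforming data")

def load():
    # Placeholder: publish curated tables for analytics consumers.
    print("loading curated data")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract runs first, then transform, then load.
    t_extract >> t_transform >> t_load
```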
|