Lead Data Engineer – (Remote – Latin America)
Bertoni Solutions | Remote (Brazil)
Company Description
We are a multinational team of individuals who believe that, with the right knowledge and approach, technology is the answer to the challenges businesses face today. Since 2016 we have brought this knowledge and approach to our clients, helping them translate technology into their success. With Swiss roots and our own development team in Lima and across the region, we offer the best of both cultures: the talent and passion of Latin American professionals combined with the organizational skills and Swiss mindset.

Job Description
We are seeking a highly skilled Lead Data Engineer with strong expertise in PySpark, SQL, and Python; Azure Data Factory, Synapse, Databricks, and Fabric; and a solid understanding of end-to-end ETL and data warehousing principles. The ideal candidate will have a proven track record of designing, building, and maintaining scalable data pipelines in a collaborative, fast-paced environment.

Key Responsibilities:
- Design and develop scalable data pipelines using PySpark to support analytics and reporting needs (a hedged sketch follows this posting).
- Write efficient SQL and Python code to transform, cleanse, and optimize large datasets.
- Collaborate with machine learning engineers, product managers, and developers to understand data requirements and deliver solutions.
- Implement and maintain robust ETL processes to integrate structured and semi-structured data from various sources.
- Ensure data quality, integrity, and reliability across pipelines and systems.
- Participate in code reviews, troubleshooting, and performance tuning.
- Work independently and proactively to identify and resolve data-related issues.
- Contribute to Azure-based data solutions, including ADF, Synapse, ADLS, and other services.
- Support cloud migration initiatives and DevOps practices.
- Provide guidance on best practices and mentor junior team members when needed.

Qualifications
- 8+ years of overall experience working with cross-functional teams (machine learning engineers, developers, product managers, analytics teams).
- 3+ years of hands-on experience developing and managing data pipelines using PySpark.
- 3 to 5 years of experience with Azure-native services, including Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), Databricks, and Azure Synapse Analytics / Azure SQL DB / Fabric.
- Strong programming skills in Python and SQL.
- Solid experience with ETL processes and end-to-end data modeling / data warehousing solutions.
- Self-driven, resourceful, and comfortable working in dynamic, fast-paced environments.
- Advanced written and spoken English is a must for this position (B2, C1, or C2 only).
- Strong communication skills are a must.

Nice to have:
- Databricks certification.
- Knowledge of DevOps, CI/CD pipelines, and cloud migration best practices.
- Familiarity with Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB.
- Basic understanding of SAP HANA.
- Intermediate-level experience with Power BI.

Additional Information
Please note that we will not move forward with applicants who do not meet the following mandatory requirements:
- 3+ years of experience with PySpark/Python; ETL and data warehousing processes; and Azure services (Azure Data Factory, Synapse, Databricks, Azure Data Lake Storage, Fabric, Azure SQL DB, etc.).
- Proven leadership experience in a current or previous project/work experience.
- Advanced written and spoken English fluency is a MUST HAVE (B2 level to C1/C2).
- MUST BE located in Central or South America, as this is a nearshore position (please note that we are not able to consider candidates requiring relocation or those located offshore).

More Details:
- Contract type: Independent contractor (this contract does not include PTO, tax deductions, or insurance; it only covers the monthly payment based on hours worked).
- Location: The client is based in the United States; however, the position is 100% remote for nearshore candidates located in Central or South America.
- Contract/project duration: Initially 6 months, with the possibility of extension based on performance.
- Time zone and working hours: Full-time, Monday to Friday (8 hours per day, 40 hours per week), from 8:00 AM to 5:00 PM PST (U.S. time zone).
- Equipment: Contractors are required to use their own laptop/PC.
- Expected start date: As soon as possible.
- Payment methods: International bank transfer, PayPal, Wise, Payoneer, etc.
- Bertoni process steps: Requirements-verification video interview.
- Partner/client process steps: CV review; one technical video interview with our partner; one or two video interviews with the end client.

Why Join Us?
- Be part of an innovative team shaping the future of technology.
- Work in a collaborative and inclusive environment.
- Opportunities for professional development and career growth.
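To make the pipeline work this posting describes easier to picture, here is a minimal, hedged PySpark sketch of the kind of batch transform listed under its responsibilities. The paths and column names (orders.csv, order_id, amount, order_date) are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark batch-transform sketch: read raw CSV, cleanse, write Parquet.
# All paths and column names are hypothetical illustrations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-demo").getOrCreate()

raw = spark.read.option("header", True).csv("/data/raw/orders.csv")  # hypothetical path

cleaned = (
    raw.dropDuplicates(["order_id"])              # de-duplicate on the key
    .filter(F.col("amount").isNotNull())          # drop rows missing amounts
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Partition by date so downstream reads can prune partitions efficiently.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")
```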

---

Lead, Software Engineer, Data & AI – Capital One Software (Remote)
Capital One | Richmond, VA
Job Description
Capital One Software is seeking an experienced Lead Software Engineer (Data & AI) to join our Rapid Prototyping Team, which is responsible for building and prototyping new features of Capital One Software's next-generation data and cloud security platform to address emerging customer use cases and showcase key product capabilities. Our software solutions leverage state-of-the-art Data & AI technologies and help enterprises discover, share, and protect their most critical data assets across hybrid and multi-cloud environments. This team plays a crucial role in designing and prototyping new product capabilities and evaluating cutting-edge technologies, and it is on the front line with customers to iterate on new features. We are seeking top-tier talent to join our pioneering team and propel us toward our destination. You will be joining a team of innovative product, tech, and design leaders that tirelessly seek to question the status quo. As a Capital One Lead Software Engineer, you'll have the opportunity to be at the forefront of building this business and bringing these tools to market.

What You'll Do
- Lead a portfolio of diverse technology projects and a team of developers with deep experience in large-scale distributed systems, data security and governance, and cloud-native architecture.
- Make impactful hands-on contributions throughout the project lifecycle, from scoping and design to coding and operations.
- Share your passion for staying on top of tech trends; experiment with and learn new technologies (Security Data Management Platform, Agentic AI, Cloud Data Warehouse, etc.); participate in internal and external technology communities; and mentor other members of the engineering community.
- Collaborate with product managers, translate product requirements into engineering tasks, and make trade-off calls to balance delivery timelines and engineering best practices.
- Own and drive cross-functional partnerships with teams spanning Engineering, Platform, Design, Data Science, Security, and GTM, and partner closely with the leadership and stakeholders of these teams.
- Drive innovations in building AI-enabled agents and fine-tuning LLMs for data classification and sensitive data detection (a toy illustration follows this posting).

Basic Qualifications
- Bachelor's degree.
- At least 4 years of experience in software engineering (internship experience does not apply).
- At least 2 years of experience with cloud computing (AWS, Microsoft Azure, or Google Cloud).
- At least 3 years of experience in a programming language: Java, Python, C++, Rust, or Go.

Preferred Qualifications
- Master's degree.
- Prior experience building a Data Security Posture Management (DSPM) system is a huge plus.
- Proficient in SQL.
- Hands-on experience with RDBMS and NoSQL databases.
- Hands-on experience with container orchestration services, including Docker and Kubernetes.
- Experience with cloud-based lakehouses such as Databricks, Snowflake, AWS Athena, etc.
- Experience with open-source data governance/catalog platforms such as DataHub, OpenMetadata, or Atlas is a plus.
- Hands-on experience building production-scale agentic AI systems; familiarity with tools such as LangChain, LangGraph, and MCP.
- Hands-on experience fine-tuning LLMs and deploying them to production to serve customers.
- 2+ years of experience with Agile practices.

At this time, Capital One will not sponsor a new applicant for employment authorization for this position.

The minimum and maximum full-time annual salaries for this role are listed below by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed-upon number of hours to be regularly worked.
- Remote (regardless of location): $175,800 - $200,700 for Manager, Software Engineering
- Richmond, VA: $175,800 - $200,700 for Manager, Software Engineering

Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter. This role is also eligible to earn performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI). Incentives could be discretionary or non-discretionary depending on the plan. Capital One offers a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being. Learn more at the Capital One Careers website. Eligibility varies based on full- or part-time status, exempt or non-exempt status, and management level. This role is expected to accept applications for a minimum of 5 business days. No agencies, please.

Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse, nor guarantee, and is not liable for, third-party products, services, educational tools, or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe, and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
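The posting above mentions data classification and sensitive data detection. As a toy illustration of the concept only (the posting points to LLM-based approaches; this is emphatically not Capital One's implementation), a minimal rule-based detector might look like the sketch below. The categories and regex patterns are illustrative assumptions.

```python
import re

# Toy, assumption-laden patterns. Real DSPM systems combine ML classifiers
# with far more robust detection logic than these illustrative regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> dict[str, list[str]]:
    """Return every match for each sensitive-data category found in text."""
    return {label: rx.findall(text)
            for label, rx in PATTERNS.items() if rx.search(text)}

if __name__ == "__main__":
    sample = "Contact jane@example.com, SSN 123-45-6789."
    print(classify(sample))
    # {'ssn': ['123-45-6789'], 'email': ['jane@example.com']}
```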

---

Lead Data Engineer – Informatica IICS (8+ Years, Remote)
Tech-assist | Remote (USA)
Job Title: Lead Data Engineer – Informatica IICS
Location: Remote (USA); occasional travel to client site if required
Job Type: Contract / Full-time
Experience Required: 8+ years
Work Shift: General (US hours)

Job Overview: We are looking for a seasoned, results-driven Lead Data Engineer with deep expertise in Informatica Intelligent Cloud Services (IICS) and data integration technologies. The ideal candidate will have strong hands-on experience with IICS CAI/CDI modules, system administration, and cloud environments such as Azure or AWS.

Job Description: The job duties and requirements below are defined for the role of Informatica IICS data engineer.
- This senior role provides technical leadership and mentorship to junior team members.
- The candidate should have relevant experience working on at least 2 to 4 end-to-end projects involving IICS.
- This position ensures the performance of all duties in accordance with the company's policies and procedures and all U.S. state and federal laws and regulations wherein the company operates.

Job Specification / Skills and Competencies
- Minimum 5+ years' experience with Informatica Data Management Cloud (IDMC), particularly with Cloud Data Integration (CDI), Cloud Application Integration (CAI), Cloud Mass Ingestion (CMI), Cloud Integration Hub (CIH), Data Quality, and API Management.
- 2+ years' hands-on experience with CAI (Processes, Service Connectors, Process Objects), developing business process automation.
- Working knowledge of handling various source/target systems, including APIs, is a must.
- Create and test complex mapping tasks and task flows, debug issues, and implement performance optimization techniques.
- Collaborate with cross-functional teams, including business analysts, architects, and data engineers, to deliver integrated data solutions.
- Perform administrator activities such as user account management, setting up permissions, creating connections, metering, license upgrades, and product upgrades.
- Strong understanding of data integration, ETL processes, data warehousing, and cloud technologies.
- Establish and enforce data quality standards and data validation processes.
- Conduct testing and quality assurance of data integration solutions to identify and resolve issues.
- Practical experience with both on-premises and cloud databases (SQL, NoSQL, etc.) and streaming platforms like Kafka is desirable.
- Fundamental understanding of cloud ecosystems like AWS, Azure, or GCP.

Job Types: Full-time, Contract
Pay: $65.00 - $80.00 per hour
Application Question(s): How many years of experience do you have as an Informatica IICS data engineer? Do you have experience with IDMC, CDI, CAI, CMI, and CIH?
Education: Bachelor's (preferred)
Experience: Informatica: 5 years (required); Data Engineer: 5 years (required)
Work Location: Remote

---

Lead Data Engineer - 100% Remote
Zeektek | Remote (U.S.)
Job Title: Data Engineer – SQL / ETL (Teradata or Snowflake)
Location: Remote (U.S.-based)
Type: 12-month contract
Industry: Healthcare / Data Engineering
Team: DnA DataExchange, a 17-person collaborative team with a service-first mindset

Job Description: We are seeking a self-motivated, analytical Data Engineer with a strong background in SQL development and ETL pipeline design to join our DataExchange team within the DnA business unit. This is a 12-month contract with the potential to work on high-impact healthcare data initiatives. The ideal candidate has 5+ years of hands-on experience in Teradata or Snowflake environments, excels at data analysis and quality validation, and can independently own and deliver data engineering solutions with minimal oversight. You'll work closely with data analysts, developers, and external partners to design and deliver complex data extracts, resolve data quality issues, and contribute to the long-term success of our cloud data infrastructure.

Responsibilities:
- Translate business and data analyst requirements into robust ETL workflows.
- Build and optimize SQL-based data pipelines using Teradata or Snowflake.
- Analyze data sets and investigate data quality issues or anomalies.
- Communicate findings and coordinate resolutions with both internal teams and external vendors.
- Write and troubleshoot complex SQL queries to support healthcare data extracts (a hedged sketch follows this posting).
- Review and interpret complex data mappings to support developers and improve data accuracy.
- Participate in weekly progress check-ins with your manager and team.

Must-Have Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of professional experience as a Data Engineer or SQL Developer.
- Expertise in Teradata or Snowflake and complex SQL query development.
- Strong understanding of ETL concepts, with hands-on experience using Talend or Informatica.
- Excellent problem-solving skills and data analysis experience.
- Ability to work independently and own deliverables end to end.
- Clear, proactive communication skills.

Nice-to-Haves:
- Previous experience with healthcare datasets.
- Background in cloud-based data tools and modern data warehouse environments.
- Experience with Talend, Informatica, or other ETL platforms.

What You'll Gain:
- The opportunity to work on meaningful projects that support member and provider health outcomes.
- Exposure to cutting-edge technologies like Snowflake, cloud tools, and scalable ETL architectures.
- A collaborative team environment with a strong mentoring culture.
- Flexibility and autonomy in a remote-friendly role.

Disqualifier: No practical experience with SQL.
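To make the "complex SQL extract" responsibility concrete, here is a minimal sketch of running an aggregate, healthcare-style extract against Snowflake from Python, using the documented snowflake-connector-python connect/cursor API. Every name in it (account, warehouse, database, tables, columns) is a hypothetical stand-in, not anything from the posting.

```python
# Minimal sketch of a Snowflake extract of the kind this role describes.
# All identifiers and credentials below are hypothetical placeholders.
import snowflake.connector

EXTRACT_SQL = """
SELECT m.member_id,
       m.plan_code,
       COUNT(c.claim_id)     AS claim_count,
       SUM(c.allowed_amount) AS total_allowed
FROM   members m
LEFT JOIN claims c
       ON c.member_id = m.member_id
      AND c.service_date >= DATEADD(month, -12, CURRENT_DATE)
GROUP BY m.member_id, m.plan_code
HAVING COUNT(c.claim_id) > 0
ORDER BY total_allowed DESC
"""

def run_extract() -> list[tuple]:
    conn = snowflake.connector.connect(
        account="my_account",      # hypothetical
        user="etl_service",        # hypothetical
        password="***",            # use a secrets manager in practice
        warehouse="ANALYTICS_WH",  # hypothetical
        database="HEALTH_DW",      # hypothetical
    )
    try:
        with conn.cursor() as cur:
            cur.execute(EXTRACT_SQL)
            return cur.fetchall()
    finally:
        conn.close()
```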

---

Lead Data Engineer (Remote)
Circana UK | Remote, United Kingdom
At Circana, we are fueled by our passion for continuous learning and growth, we seek and share feedback freely, and we celebrate victories both big and small in an environment that is flexible and accommodating to our work and personal lives. We have a global commitment to diversity, equity, and inclusion, as we believe in the undeniable strength that diversity brings to our business, employees, clients, and communities. With us, you can always bring your full self to work. Join our inclusive, committed team to be a challenger, own outcomes, and stay curious together. Circana is proud to be Certified by Great Place To Work. This prestigious award is based entirely on what current employees say about their experience working at Circana. Learn more at www.circana.com.

What will you be doing? We are seeking a skilled and motivated Data Engineer to join a growing global team based in the UK. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply!

Job Responsibilities
- Data engineering and pipeline development: Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow. Implement real-time and batch data processing using Spark. Enforce best practices for data quality, governance, and security throughout the data lifecycle. Ensure data availability, reliability, and performance through monitoring and automation.
- Cloud data engineering: Manage cloud infrastructure and cost optimization for data processing workloads. Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments.
- Big data and analytics: Build and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning for Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives.
- Workflow orchestration (Airflow): Design and maintain DAGs (Directed Acyclic Graphs) in Airflow to automate complex data workflows (see the sketch after this posting). Monitor, troubleshoot, and optimize job execution and dependencies.
- Team leadership and collaboration: Lead a team of data engineers, providing technical guidance and mentorship. Foster a collaborative environment and promote best practices for coding standards, version control, and documentation.

Requirements
- This is a client-facing role; strong communication and collaboration skills are vital.
- Experience in data engineering, with expertise in Azure, PySpark, Spark, and Airflow.
- Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code.
- Deep understanding of Spark internals (RDDs, DataFrames, DAG execution, partitioning, etc.).
- Experience with Airflow DAGs, scheduling, and dependency management.
- Knowledge of Git, Docker, Kubernetes, and Terraform, and the ability to apply DevOps best practices to CI/CD workflows.
- Excellent problem-solving skills and the ability to optimize large-scale data processing.
- Experience leading teams and working in Agile/Scrum environments.
- A proven track record of working effectively with global remote teams.

Desirable:
- Experience with data modelling and data warehousing concepts.
- Familiarity with data visualization tools and techniques.
- Knowledge of machine learning algorithms and frameworks.

Circana Behaviours
As well as the technical skills, experience, and attributes required for the role, our shared behaviours sit at the core of our organization. Therefore, we always look for people who can continuously champion these behaviours throughout the business within their day-to-day role:
- Stay Curious: Being hungry to learn and grow, always asking the big questions.
- Seek Clarity: Embracing complexity to create clarity and inspire action.
- Own the Outcome: Being accountable for decisions and taking ownership of our choices.
- Centre on the Client: Relentlessly adding value for our customers.
- Be a Challenger: Never complacent, always striving for continuous improvement.
- Champion Inclusivity: Fostering trust in relationships, engaging with empathy, respect, and integrity.
- Commit to each other: Contributing to making Circana a great place to work for everyone.

Location
This position can be located in the following area(s): Remote, or Bracknell, UK. #LI-KM1
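The Airflow responsibility above (designing and maintaining DAGs) is illustrated by the following minimal sketch. The DAG id, schedule, and task bodies are placeholder assumptions, not Circana's code; the operator import and DAG constructor follow the documented Airflow 2.x API (the `schedule` argument assumes Airflow 2.4+).

```python
# Minimal Airflow DAG sketch: a linear extract -> transform -> load chain.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")       # placeholder step

def transform():
    print("cleanse and reshape with PySpark")        # placeholder step

def load():
    print("write curated tables to Azure storage")   # placeholder step

with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    # Dependencies: extract must finish before transform, then load.
    t1 >> t2 >> t3
```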

---

Lead Data Engineer (Remote Working) – 6-Month Contract, Extendable Based on Performance
EPS Ventures Sdn Bhd | Remote, Malaysia
Job Scope: 13+ years of experience covering:
- Designing and implementing scalable data architectures to support BI, reporting, and analytics.
- Architecting data pipelines using ETL/ELT processes to integrate data from diverse sources.
- Leading a team of data engineers in developing data solutions and managing end-to-end data projects.
- Mentoring junior engineers and providing technical guidance and best practices in data engineering.
- Using big data technologies such as Hadoop, Spark, and Kafka for real-time data processing.
- Implementing distributed data storage solutions using HDFS, Cassandra, or similar technologies.
- Deploying and managing data infrastructure on cloud platforms like AWS, Azure, or Google Cloud.
- Using cloud-based services such as AWS S3, Redshift, Athena, and Glue for data storage and processing.
- Collaborating with BI teams to develop data models and support analytical reporting requirements.
- Collaborating closely with cross-functional teams.

Hurry! If you are the one we are seeking, kindly send your resume to suganthi@eps.my for a quick chat!

Job Type: Contract
Contract length: 6 months
Pay: RM13,000.00 - RM18,000.00 per month
Schedule: Monday to Friday
Application Question(s): Are you available to start immediately? What is your expected salary?
Experience: Data Engineer: 10 years (required)

---

Data Engineer Lead
muttdata | Argentina
🚀 Join Our Data Products and Machine Learning Development Startup! 🚀

Mutt Data is a dynamic startup committed to crafting innovative systems using cutting-edge Big Data and Machine Learning technologies. We are looking for a Data Engineer Lead who can help us take our team and expertise to the next level. If you're someone who thrives on building systems, devising creative solutions, learning new tools, and designing innovative strategies and architectures, we would love to get to know you! 🐶

We harness the power of technologies like Spark, Airflow, Kubernetes, MLflow, Kafka, dbt, Airbyte, DuckDB, Apache Pinot, and SQL and NoSQL databases, among many others, across cloud providers like Amazon Web Services and Google Cloud Platform and platforms like Astronomer and Databricks.

➤ Our Reach: We collaborate with tech startups and major corporations in Argentina, the United States, Brazil, Colombia, Spain, and Uruguay. Our team excels in developing and maintaining large-scale production data systems with exceptional technical expertise. 🌎
➤ Our Partnerships: We're proud to be associated with organizations like AWS, Astronomer, Google Cloud, Kaszek, H2O.ai, Product Minds, and Soda. 🤝

🏡 Built for a remote life: Mutt Data is remote-first and remote-always. We've designed our culture, communications, and tools to support a distributed team since the beginning. Being remote by design allows Mutt Data to be thoughtful and intentional about creating diverse teams and supporting them with a work environment that fits their lives. With a generous PTO policy and Slack channels for every interest, our culture embraces the things happening in your life. Maybe you need to adjust your schedule to care for your family or take a bike ride. At Mutt Data, that's embraced.

These are some of the problems we solve:
➤ Building modern data stacks 🛠️
➤ Real-time advertising auction systems 📊
➤ Scalable cloud architectures ☁️
➤ Applied machine learning to solve business problems 🤖
➤ Promotions optimization 🔍
Read about our case studies here.

Responsibilities 🤓
➤ Solution Design: Brainstorm and craft solutions that address use cases, leveraging your expertise to provide guidance on potential solution paths and how to overcome obstacles.
➤ Cross-Functional Liaison: Act as a liaison between the technology team and the product/business side, adept at handling scope changes and resource/task prioritization when required.
➤ Team Leadership: Lead teams, collaborating with machine learning and data engineers.
➤ Technical Support: Assist the team in code development, offer technical guidance, and conduct code reviews.
➤ Technical Interviews: Collaborate on hiring interview processes (exam reviews and technical interviews).
➤ Communication and Feedback: Actively participate in discussions on issues, schedule meetings, and provide peer feedback while assisting others in achieving their technical career goals.
➤ Technology Research: Explore and develop new technologies to enhance Mutt Data's toolset and adopt best practices.
➤ System Development: Conceptualize, define, present, advocate, prototype, construct, manage, and maintain data systems.
➤ Proof of Concepts: Develop proofs of concept; create machine learning models; construct dashboards, APIs, and data platforms.
➤ Project Management: Closely collaborate on managing projects by engaging with customers to understand the scope of work.

Required Skills 💻
✓ Software engineering and development experience (minimum 4+ years).
✓ Experience with data pipelines: a strong background building data pipelines for analytics or machine learning (see the streaming sketch after this posting).
✓ Team leadership and client interactions.
✓ Advanced Python knowledge.
✓ Solid SQL knowledge.
✓ Customer requirements implementation: the ability to interpret and implement customers' technical requirements.
✓ Analytical data systems: experience building analytical data systems using modern data warehouses (e.g., BigQuery, Redshift, Snowflake) or data lakes (e.g., Databricks, AWS S3, Presto, EMR, Glue, etc.).
✓ Distributed computing: experience building distributed data pipelines using Spark, Python, and SQL.
✓ Hypermodern Python stack: proficiency with state-of-the-art tools like Poetry, code formatters (black), linters (flake8, pylint, etc.), testing libraries (pytest, hypothesis, etc.), type checking, static analysis, and tooling for continuous integration and delivery.
✓ Familiarity with the modern data stack: dbt, Airflow, and Airbyte.
✓ Advanced Spanish.

Nice-to-have skills 😉
✓ AWS or GCP management.
✓ Code hygiene: a strong commitment to code hygiene, including code review, documentation, testing, and CI/CD (continuous integration / continuous delivery).
✓ Stream processing tools: knowledge of stream processing tools such as Kafka, Kinesis, Spark Streaming, etc.
✓ Python's scientific stack: numpy, pandas, Jupyter, matplotlib, scikit-learn, and related tools.
✓ Intermediate English.

Benefits 😎
🎉 Social paid events
🏢 Worknmates coworking spaces
😎 Mutt Week: Get an additional week of vacation each year.
📚 Paid AWS and GCP certification exams: We cover the costs of your Amazon Web Services (AWS) and Google Cloud Platform (GCP) certification exams and study materials.
🎈 Birthday free day
🗣️ In-company English lessons
👥 Referral bonuses
🏡 Remote-first culture: Benefit from flexible working hours and locations.
🌼 Annual Mutters' Day: Join in celebrating our annual Mutters' Day.
✈️ Annual Mutters' Trip: Participate in our exciting annual Mutters' trip.

Even if your experience only meets some of the bullets on the above lists, we'd love to learn more about you and why you think Mutt Data is the next step in your career. 🙌😊 Are you interested in joining our team? Send us your resume. We can't wait to meet you! 🤝
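Several postings above, including Mutt Data's stream-processing bullet, mention Kafka-based pipelines. Below is a minimal, hedged PySpark Structured Streaming sketch that reads JSON events from a Kafka topic and appends them to Parquet. The broker address, topic, schema, and paths are illustrative assumptions, and running it requires the spark-sql-kafka connector package on the Spark classpath.

```python
# Minimal Structured Streaming sketch: Kafka JSON events -> Parquet sink.
# Broker, topic, schema fields, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-events-demo").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "events")                        # hypothetical topic
    .load()
    # Kafka values arrive as bytes; decode and parse the JSON payload.
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/tmp/events_parquet")             # hypothetical sink
    .option("checkpointLocation", "/tmp/events_ckpt")  # required for streaming
    .outputMode("append")
    .start()
)
query.awaitTermination()
```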