Databricks Specialist - Short-term contract

CI&T
Contract
On-site
London, ENG

JobsCloseBy Editorial Insights

CI&T seeks a Senior Data Engineer and Databricks Specialist for a short-term contract onsite in London with an immediate start. You will design and build robust data pipelines that integrate data from APIs, databases and files, partnering with business and analytics teams to deliver high-quality, governed data. The role emphasizes DataOps, Git, CI/CD, automated tests, and infrastructure as code to ensure security and reliability. Required: extensive Databricks experience with PySpark, Delta Lake and SQL, strong Python and SQL, ETL/ELT, cloud data services, and problem-solving. To apply, tailor your CV to showcase concrete projects with metrics, confirm onsite availability in London, and highlight collaboration with business stakeholders.


Immediate Start
As a Senior Data Engineer, you will lead the design and development of robust data pipelines, integrating and transforming data from diverse sources such as APIs, relational databases, and files. Collaborating closely with business and analytics teams, you will ensure high-quality deliverables that meet the strategic needs of our organization. Your expertise will be pivotal in maintaining the quality, reliability, security and governance of the ingested data, thereby driving our mission of Collaboration, Innovation & Transformation.
Key Responsibilities:
- Develop and maintain data pipelines (a minimal sketch of such a pipeline follows this list).
- Integrate data from various sources (APIs, relational databases, files, etc.).
- Collaborate with business and analytics teams to understand data requirements.
- Ensure quality, reliability, security and governance of the ingested data.
- Follow modern DataOps practices such as code versioning, data tests and CI/CD.
- Document processes and best practices in data engineering.
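
To make the pipeline work concrete, here is a minimal PySpark sketch of the kind of ingestion described above: read raw files, apply a basic quality step, and persist a governed Delta table. The paths, column names, and table name are hypothetical, and it assumes a Databricks-style runtime where Delta Lake is available (locally, the delta-spark package would be needed).

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; this line is for portability.
spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest raw records from files (one of the source types listed above).
raw = (
    spark.read.option("header", "true")
    .csv("/mnt/raw/orders/")  # hypothetical landing path
)

# Basic quality and governance step: enforce a business key, cast types,
# and stamp each row with its ingestion time.
clean = (
    raw.dropna(subset=["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("ingested_at", F.current_timestamp())
)

# Persist as a Delta table for business and analytics consumers.
clean.write.format("delta").mode("append").saveAsTable("analytics.orders_clean")
```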
Required Skills and Qualifications:
Must-have Skills:
- Proven experience in building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL).
- Strong programming skills in Python and SQL for data processing and transformation.
- Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
- Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, infrastructure-as-code (see the test sketch after this list).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
- Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
- Proficiency in Git.
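
As an illustration of the automated-testing practice listed above, the following is a small pytest-style data test. The transformation under test is a hypothetical stand-in for real pipeline code; only pytest and a local Spark session are assumed.

```python
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    # Lightweight local session for CI runs; no cluster required.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def drop_rows_missing_key(df, key):
    """Example transformation: drop rows with a null business key."""
    return df.dropna(subset=[key])


def test_rows_missing_key_are_dropped(spark):
    df = spark.createDataFrame(
        [("o1", 10.0), (None, 5.0)], ["order_id", "amount"]
    )
    result = drop_rows_missing_key(df, "order_id")
    assert result.count() == 1
    assert result.first()["order_id"] == "o1"
```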