Date: 3 weeks ago
City: Roswell, Georgia
Contract type: Full time

Job Summary
Kloeckner Metals Corporation is seeking a highly skilled and motivated Data Engineer to join our growing data team. The ideal candidate will possess a strong foundation in data warehousing concepts, including ETL/ELT processes, dimensional modeling, and semantic layer development. This role requires a hands-on approach to building and maintaining robust data pipelines and infrastructure, with a focus on automation and efficiency. You will be responsible for designing, developing, and deploying data solutions that empower our organization to make data-driven decisions.
Job Responsibilities
Data Pipeline Development
- Design, build, and maintain efficient ETL/ELT pipelines to process large-scale structured and unstructured data.
- Leverage Spark (PySpark/SQL) and Python for data transformation and processing.
- Integrate data from diverse sources, including databases, APIs, and cloud platforms.
- Utilize Microsoft Fabric for data lake, data warehouse, and data pipeline development.
Data Modeling & Warehousing
- Develop and optimize dimensional models, fact tables, and views to support analytics and reporting.
- Design semantic models for BI tools (e.g., Power BI) to ensure accurate and accessible data for stakeholders.
- Build and optimize data warehouses and data marts using dimensional modeling techniques (star schema, snowflake schema).
- Write complex and efficient SQL queries for data extraction, transformation, and loading.
- Develop and implement data transformations and data quality checks using Python and Spark.
- Design and implement data integration solutions for various data sources, including Oracle databases.
Automation & Integration
- Automate repetitive data workflows using Power Automate and API integrations.
- Build and maintain APIs to facilitate seamless data exchange between systems.
- Implement monitoring and alerting systems for pipeline reliability.
Collaboration & Support
- Collaborate with data analysts and business stakeholders to understand data requirements and deliver effective solutions.
- Support Power BI report development by ensuring clean, transformed data availability.
Performance Optimization
- Tune SQL queries, Spark jobs, and database configurations for speed and efficiency.
- Monitor and troubleshoot data pipeline performance and data quality issues.
Documentation & Best Practices
- Document pipelines, data models, and processes.
- Advocate for data governance, security, and quality standards.
- Implement data security and access control measures.
Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field.
- A minimum of 2-3 years of proven experience as a Data Engineer or in a similar role.
- Strong understanding of ETL/ELT processes and data warehousing concepts.
- Expertise in dimensional modeling (star schema, snowflake schema).
- Advanced SQL skills, including query optimization and performance tuning.
- Proficiency in Python and Spark for data processing and transformation.
- Working knowledge of APIs and experience using them for data integration.
- Experience with automating processes using Power Automate.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Experience with version control systems (e.g., Git).
- Experience with Microsoft Fabric and Power BI, including DAX and Power Query.
- Familiarity with Oracle databases, including PL/SQL.
- Experience in cloud-based data warehousing solutions (e.g., Azure Synapse Analytics).
- Knowledge of data governance and data quality principles.
- Experience working with cloud platforms (Azure, AWS, GCP) for data engineering solutions.
- Familiarity with Lakehouse architectures and Delta Lake.
- Knowledge of CI/CD for data pipelines using DevOps tools.