Amazon

Data Engineer III - AMZ4344

  • Internship
  • Seattle (King)
  • IT development

Job description



DESCRIPTION

MULTIPLE POSITIONS AVAILABLE

Entity: Amazon.com Services LLC

Position: Data Engineer III

Location: Seattle, WA

Position Responsibilities:

  • Design, implement, and support ETL pipelines for enterprise-scale datasets, using SQL and Big Data tools such as Hive, Pig, Spark, and Presto.
  • Model data and metadata to support ad hoc and pre-built reporting.
  • Interface with business customers, gather requirements, and deliver complete BI solutions.
  • Tune application and query performance using Unix profiling tools and SQL.
  • Provide guidance and support to engineers on industry best practices and direction.
  • Evaluate and select tools for processing data, including both internally developed and industry tools.
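The first responsibility above — an ETL pipeline feeding pre-built reporting — can be sketched in miniature. This is an illustrative example only, not Amazon's stack: it uses Python's built-in sqlite3 in place of Hive/Spark/Presto, and all table and column names are hypothetical.

```python
import sqlite3

# Minimal ETL sketch: extract rows from a raw source table, transform
# (aggregate) them, and load the result into a reporting table.
# Hypothetical schema; a real pipeline would run at enterprise scale
# on a distributed engine such as Spark or Presto.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a hypothetical raw orders table.
cur.execute("CREATE TABLE raw_orders (order_id INTEGER, amount_cents INTEGER)")
cur.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                [(1, 1250), (2, 999), (3, 4300)])

# Transform + Load: aggregate into a pre-built reporting table.
cur.execute(
    "CREATE TABLE order_summary AS "
    "SELECT COUNT(*) AS n_orders, SUM(amount_cents) / 100.0 AS total_usd "
    "FROM raw_orders"
)

n_orders, total_usd = cur.execute(
    "SELECT n_orders, total_usd FROM order_summary"
).fetchone()
print(n_orders, total_usd)  # 3 65.49
conn.close()
```

The same extract/transform/load shape carries over to the Big Data tools named in the posting; only the engine and the scale change.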

Amazon.com is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation #0000

PREFERRED QUALIFICATIONS

Please see job description and position requirements above.

Ideal candidate profile



BASIC QUALIFICATIONS

Position Requirements:

Bachelor's degree or foreign equivalent in Computer Science, Computer Engineering, Information Systems, Information Systems Management, or a related field, plus four years of experience in the job offered or as a Data Engineer, Data Integration Engineer, Business Intelligence Engineer, or a related occupation. Two years of that experience must include:

  • coding proficiency in at least one modern programming language;
  • troubleshooting and tuning SQL queries;
  • designing and modeling enterprise data sets and ETL pipelines;
  • writing SQL and Unix scripts;
  • data warehousing;
  • reading, writing, and debugging data processing and orchestration code;
  • experience using a big data technology (Hadoop, Hive, HBase, Pig, or Spark).
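The "troubleshooting and tuning SQL queries" requirement typically starts with reading query plans. A minimal sketch, again using Python's built-in sqlite3 as a stand-in for a production engine (table and index names are hypothetical):

```python
import sqlite3

# Inspect a query plan before and after adding an index.
# The exact plan wording varies by SQLite version, but the shift from
# a full-table SCAN to an index SEARCH is the point of the exercise.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (user_id INTEGER, ts INTEGER)")

# Without an index, the lookup scans the whole table.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchone()[-1]
print(plan_before)  # e.g. "SCAN events"

# Adding an index lets the engine seek directly to matching rows.
cur.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchone()[-1]
print(plan_after)  # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"
conn.close()
```

On distributed engines such as Hive, Spark, or Presto the mechanics differ (partitioning and file layout matter more than B-tree indexes), but the workflow — read the plan, change the physical design, re-check the plan — is the same.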