Amazon

Data Engineer

  • Internship
  • Amsterdam (Montgomery County)
  • IT development

Job description

DESCRIPTION

Amazon Web Services is seeking an extraordinary Data Engineer to join the AWS Billing team.

Amazon Web Services (AWS) is looking for talented data engineers with a passion for Big Data and distributed systems at the scale of trillions of transactions to help build the next generation of AWS internal services. Our applications process 270 million events per second and 20 terabytes of data per hour. As a foundational system, they must scale with the growth of cloud computing at Amazon. The AWS Billing team is responsible for metering usage and generating monthly charges. Its responsibilities include, but are not limited to, AWS product pricing, AWS product subscriptions, AWS product discount programs, customer credit management, storing AWS product usage, computing the bill and the estimated bill, computing tax, and storing bills and line items for external customer consumption.

As a Data Engineer, you will have the opportunity to lead the paradigm shift in streaming Big Data by building applications on top of cutting-edge AWS technologies such as Kinesis, EMR, DynamoDB, Redshift, and Aurora. You will also build meaningful software that can radically change how AWS wins our largest customers over to the Cloud. Finally, as an Amazon engineer, you get to own the full lifecycle of your systems, work on challenging problems at "Amazon Scale", and collaborate with some of the best in the business.

PREFERRED QUALIFICATIONS

· Authoritative in ETL optimization: designing, coding, and tuning big data processes using Apache Spark or similar technologies.
· Experience building data pipelines and applications that stream and process datasets at low latency.
· Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving the discoverability of data.
· Sound knowledge of distributed systems and data architecture (e.g. the lambda architecture): able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP of high-level data structures.
· Knowledge of engineering and operational excellence using standard methodologies.

Desired profile

BASIC QUALIFICATIONS

· 5+ years of work experience with ETL, Data Modeling, and Data Architecture.
· Expert-level skills in writing and optimizing SQL.
· Experience with Big Data technologies such as Hive/Spark.
· Proficiency in one or more of the following languages: Python, Ruby, Java, or similar.
· Experience operating very large data warehouses or data lakes.
· Proven interpersonal skills and a reputation as a standout colleague.
· A real passion for technology. We are looking for someone who is keen to demonstrate their existing skills while trying new approaches.
