Amazon

Data Engineer

  • Internship
  • Seattle (King)
  • Studies / Statistics / Data

Job description

At Amazon, we are committed to being the most customer-centric company on Earth. The North American Supply Chain Organization (NASCO) comprises high-powered, dynamic teams that are shaping network execution through the development and application of innovative supply chain management concepts. Our goal is to improve and enhance the Amazon fulfillment network to ultimately drive the best customer experience in a reliable and cost-efficient manner.

Within NASCO, Amazon's Inbound Supply Chain (IBSC) team is looking for a creative, self-motivated, experienced, and highly curious individual with strong data engineering skills to join the team as a data engineer. The ideal candidate will have demonstrated experience building, optimizing, and maintaining large data infrastructures, along with a solid understanding of enterprise-level data warehouse architectures spanning multiple platforms. The candidate is expected to build efficient, flexible, extensible, and scalable Extract-Transform-Load (ETL) services as well as reporting solutions. Ideally, the candidate is enthusiastic about learning new technologies to deliver new functionality to users or to scale existing platforms. Above all, the candidate should be passionate about working with huge data sets to answer critical business questions and drive change, and should be able to work with business owners to define key business questions and then build the data sets that answer them.

Our ideal candidate thrives in a fast-paced environment, relishes working with large transactional volumes of big data, enjoys the challenge of highly complex business contexts that are often still being defined in real time, and is passionate about data and about providing excellent support to business stakeholders and data professionals. The data engineer will work hand-in-hand with IBSC business analysts, business intelligence engineers (BIEs), and data scientists. The candidate must be a self-starter who is comfortable with ambiguity, has strong attention to detail, and can work in a fast-paced, high-energy, ever-changing environment, applying a strong technical skill set and a data-driven approach to complex problems across our inbound supply chain. This individual is a great communicator, both written and verbal, with demonstrated experience working cross-functionally across many stakeholder teams that may have competing priorities. The position provides opportunities to influence one of the highest-visibility, highest-impact areas in NASCO.
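
As a rough illustration of the kind of ETL service described above, here is a minimal sketch in Python that extracts and aggregates rows from a source table and loads the result into a warehouse summary table. The cluster endpoint, credentials, and all table and column names are hypothetical.

import psycopg2  # common driver for Redshift's PostgreSQL-compatible interface

# Hypothetical connection details; a real job would pull credentials
# from a secret store rather than hard-coding them.
conn = psycopg2.connect(
    host="example-cluster.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)

# Extract, transform, and load in one set-based statement: roll up
# yesterday's raw inbound receipts into a daily summary table.
with conn, conn.cursor() as cur:
    cur.execute("""
        INSERT INTO inbound_daily_summary (receipt_date, warehouse_id, units_received)
        SELECT receipt_date, warehouse_id, SUM(units)
        FROM raw_inbound_receipts
        WHERE receipt_date = CURRENT_DATE - 1
        GROUP BY receipt_date, warehouse_id;
    """)

conn.close()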

Critical Leadership Principles
· Deliver Results
· Bias for Action
· Dive Deep
· Learn and Be Curious

Responsibilities
· Retrieve and synthesize data from multiple sources, and present critical insights in a format that is immediately useful for answering specific questions or improving performance.
· Deep dive into anomalies, whether anecdotal or automatically detected, to explain why they happen and to identify fixes.
· Deliver written and verbal presentations that share insights and recommendations with audiences of varying levels of technical sophistication.
· Maintain and optimize Redshift clusters, ensuring that large queries run smoothly (a maintenance sketch follows this list).
· Manage Amazon Web Services (AWS) and Lightweight Directory Access Protocol (LDAP) group permissions for cluster access.
· Manage a large fleet of Linux hosts, with responsibility for acquisition, package management, and code deployment.
· Manage and maintain Hoot table subscriptions, and create other tables through load jobs.
· Coach other data professionals (e.g., business analysts, BIEs, and data scientists) on data engineering concepts and AWS technology.
· Develop a self-serve BI environment using AWS solutions (e.g., QuickSight).
· Manage trouble tickets and own the team wiki.
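
As a sketch of the Redshift maintenance noted in the list above, the snippet below queries the SVV_TABLE_INFO system view for tables that are heavily unsorted or have stale optimizer statistics, then runs VACUUM and ANALYZE on each. The thresholds and connection details are illustrative assumptions, not prescribed values.

import psycopg2

# Hypothetical connection details for an example cluster.
conn = psycopg2.connect(
    host="example-cluster.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin_user",
    password="...",
)
conn.autocommit = True  # Redshift VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    # Flag user tables that are more than 20% unsorted or whose
    # statistics are more than 10% stale (both illustrative thresholds).
    cur.execute("""
        SELECT "schema", "table"
        FROM svv_table_info
        WHERE unsorted > 20 OR stats_off > 10;
    """)
    for schema, table in cur.fetchall():
        cur.execute(f'VACUUM "{schema}"."{table}";')
        cur.execute(f'ANALYZE "{schema}"."{table}";')

conn.close()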

Desired profile

BASIC QUALIFICATIONS

· Bachelor's degree in computer science, data engineering, or a related field.
· 2+ years of relevant experience in dimensional data modeling, Extract-Transform-Load (ETL) development, and data warehousing (see the star-schema sketch after this list).
· Data warehousing experience with Oracle, Redshift, Teradata, etc.
· Experience with relevant big data technologies (e.g., Hadoop, Hive, HBase, Pig, Spark).
· Strong fluency and experience in programming and scripting languages (e.g., Scala, Python, Perl).
· Experience translating business questions into analytical questions, and using quantitative techniques to arrive at a solution with the available data.
· Experience processing, filtering, and presenting large quantities (millions to billions of rows) of data.
· Excellent written and verbal communication skills on quantitative topics.
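
For context on the dimensional data modeling qualification above, here is a minimal star-schema sketch: a fact table of inbound receipts keyed to date and warehouse dimensions, created on Redshift. Every table name, column, and key choice is hypothetical.

import psycopg2

# Hypothetical star schema: one fact table joined to two dimension tables.
DDL = """
CREATE TABLE dim_date (
    date_key      INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240115
    calendar_date DATE NOT NULL
);

CREATE TABLE dim_warehouse (
    warehouse_key INTEGER PRIMARY KEY,
    warehouse_id  VARCHAR(16) NOT NULL,
    region        VARCHAR(32)
);

CREATE TABLE fact_inbound_receipts (
    date_key       INTEGER REFERENCES dim_date (date_key),
    warehouse_key  INTEGER REFERENCES dim_warehouse (warehouse_key),
    units_received BIGINT,
    freight_cost   DECIMAL(12, 2)
)
DISTKEY (warehouse_key)  -- co-locate each warehouse's rows on one slice
SORTKEY (date_key);      -- allow range-restricted scans by date
"""

conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com",
                        port=5439, dbname="analytics",
                        user="etl_user", password="...")
with conn, conn.cursor() as cur:
    cur.execute(DDL)
conn.close()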
