Amazon

Sr Data Engineer

  • Internship
  • Seattle (King)
  • IT development

Job description

DESCRIPTION

AWS Support is one of the largest and fastest-growing business units within AWS. We are a highly technical, innovative organization that is revolutionizing customer engagement processes and offering top-notch technical support for the full portfolio of AWS products and features. We are determined to redefine the word “Support” and lead the industry with best-in-class technology.

We are looking for an experienced, self-driven, analytical, and strategic Sr. Data Engineer. In this role, you will work across a large and complex data lake/warehouse environment. You are passionate about working with disparate datasets and bringing data together to answer business questions. You should have deep expertise in the creation and management of datasets and a proven ability to translate data into meaningful insights through collaboration with product managers, software development engineers, business intelligence engineers, operations managers, and leaders. You will own end-to-end development of data engineering solutions to complex questions, and you will play an integral role in strategic decision-making.

In this role, you will have the opportunity to display and develop your skills in the following areas:
· Interface with PMs, business customers, and software developers to understand requirements and implement solutions
· Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines on distributed data processing platforms built with AWS technologies, providing ad hoc access to large datasets and computing power
· Explore and learn the latest AWS big data technologies, evaluate and make decisions around the use of new or existing software products to design the data architecture
· Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation

PREFERRED QUALIFICATIONS

· Authoritative in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies.
· Experience with building data pipelines and applications to stream and process datasets at low latencies.
· Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability.
· Sound knowledge of distributed systems and data architecture (e.g., the Lambda architecture): design and implement batch and stream data processing pipelines, and know how to optimize the distribution, partitioning, and MPP (massively parallel processing) of high-level data structures.
· Experience with full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing.

Desired profile

BASIC QUALIFICATIONS

· Bachelor’s degree in Computer Science or related technical field, or equivalent work experience.
· 7+ years of work experience with ETL, data modeling, and data architecture.
· 7+ years of experience writing and optimizing SQL.
· Proven experience with SQL and large datasets, data modeling, ETL development, and data warehousing, or similar skills.
· Experience with the AWS technology stack, including Redshift, RDS, S3, and EMR, or similar solutions built around Hive/Spark, etc.
· Experience operating very large data warehouses or data lakes.
