Job Details
Location:
7715, Chevy Chase Drive, Georgian Acres, Austin, Travis County, Texas, 78752, USA
Posted:
May 27, 2020
Job Description
We Are Hiring
Data Engineer - Full-Time Day - Austin, TX, St. Louis, MO, Chicago, IL, Nashville, TN or Indianapolis, IN
Why Join Ascension?
Ascension Information Services is one of the nation’s largest healthcare information technology services organizations. We provide Ascension and its subsidiaries low-cost, high-value IT infrastructure and software application services that:
• Support rapid and effective clinical decision making
• Improve efficiency and care transitions
• Foster information sharing across the continuum of care
• Make knowledge and data actionable, leading to improved patient outcomes
What You Will Do
Responsible for the construction and development of large-scale cloud data processing systems, the Data Engineer must have considerable expertise in data warehousing and proven coding expertise in Python, Java, SQL, and Spark. The Data Engineer must be able to implement enterprise cloud data architecture designs and will work closely with the rest of the scrum team and internal business partners to identify, evaluate, design, and implement large-scale data solutions spanning structured and unstructured, public and proprietary data. The Data Engineer will work iteratively on the cloud platform to design, develop, and implement scalable, high-performance solutions that deliver measurable business value to customers.
Desired Skills/Experience:
- Proficient in multiple programming languages, frameworks, domains, and tools. Coding skills in SQL, Python, Java, Scala, and Spark; experience with technologies such as HDFS, Spark, Hive, shell scripting, and Bash preferred.
- Experience with the version control platform GitHub.
- Expertise with development ecosystem tools such as Git, Jenkins, Artifactory, CI/CD, and Terraform.
- Experience with GCP development tools such as Pub/Sub, Cloud Storage, Bigtable, BigQuery, Dataflow, Dataproc, and Composer desired.
- Strong Linux/Unix background and hands-on knowledge. Knowledge of Hadoop, cloud platforms, and their surrounding ecosystems.
- Experience with web services and APIs such as REST and SOAP.
- Experience unit testing code. Ability to document designs and concepts, including API orchestration and choreography for consumer apps.
- Well-rounded technical expertise in Apache packages and hybrid cloud architectures.
- Pipeline creation and automation for data acquisition.
- Metadata extraction pipeline design and creation between raw and final transformed datasets.
- Quality-control metrics collection on data acquisition pipelines.
- Able to collaborate with the scrum team, including the scrum master, product owner, data analysts, quality assurance, business owners, and data architects, to produce the best possible end products.
- Experience contributing to and leveraging Jira and Confluence.
- Strong experience building real-time streaming applications and batch-style, large-scale distributed computing applications using tools such as Spark, Kafka, Flume, Pub/Sub, and Airflow.
- Ability to work with different file formats such as Avro, Parquet, and JSON.
- Managing and scheduling batch jobs.
- Hands-on experience in the analysis, design, coding, and testing phases of the Software Development Life Cycle (SDLC).
- Ability to advise management on approaches to optimize for data platform success.
- Able to effectively communicate highly technical information to numerous audiences, including management, the user community, and less-experienced staff.
- Consistently communicate the status of project deliverables.
- Consistently provide work-effort estimates to management to assist in setting priorities.
- Deliver timely work in accordance with estimates.
- Solve problems as they arise and communicate potential roadblocks to manage expectations.
- Adhere strictly to all security policies
Required Work Experience:
- Four to seven years of experience.
- Minimum years of relevant experience: 2.
- Some of the minimum experience requirement may be met with a Master's or other advanced degree.
- Coding experience with Python, Java, and Spark.
- Strong Linux/Unix background and hands-on knowledge.
- Past experience with big data technologies such as HDFS, Spark, Impala, Hive, shell scripting, and Bash.
- Experience with the version control platform GitHub.
- Experience with development ecosystem tools such as Jenkins, Artifactory, CI/CD, and Terraform.
- Demonstrated ability to manage multiple projects and strong analytical skills
- Must possess excellent written and verbal communication skills
- Ability to understand and analyze complex data sets
- Ability to assume responsibility and maintain confidentiality
Qualifications/Education:
- Master's-level technology degree preferred.
- Technology certifications preferred.
- Bachelor's-level degree required.
What You Will Need
Education:
- High school diploma/GED with 2 years of experience, or Associate's degree, or Bachelor's degree required.
Work Experience:
- 1 year of experience required.
- 4 years of experience preferred.
- 2 years of leadership or management experience preferred.
Equal Employment Opportunity
Ascension Technologies is an EEO/AA Employer M/F/Disability/Vet. Please click the link below for more information.
EEO is the Law Poster: http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf
EEO is the Law Poster Supplement: http://www.dol.gov/ofccp/regs/compliance/posters/pdf/ofccp_eeo_supplement_final_jrf_qa_508c.pdf
E-Verify Statement
Ascension Technologies participates in the Electronic Employment Verification Program. Please click the E-Verify link below for more information.
E-Verify (link to E-verify site)