
Job Information

Cotiviti, Inc. Data Engineer in South Jordan, Utah

Data Engineer

Job Locations: US-Remote

ID: 2024-12911

Category: Engineering/IT

Position Type: Full-Time

Overview

Cotiviti is a leading solutions and analytics company that leverages unparalleled clinical and financial datasets to deliver deep insight into the performance of the healthcare system. These insights uncover new opportunities for healthcare organizations to collaborate to improve their financial performance, reduce inefficiency, and improve healthcare quality.

The Data / Implementation Engineer is responsible for designing and developing software components, especially the ETL pipelines associated with onboarding client data to a distributed environment, in compliance with predefined coding standards and technical designs. This person will collaborate effectively with Senior Developers, QA, Product Owners, Project Management, and other stakeholders. Cotiviti develops highly innovative healthcare analytics applications that generate performance improvement opportunities and value for our clients.

Responsibilities

  • Design and develop high-quality, maintainable software modules for the Cotiviti, Inc. product suite.
  • Conduct unit and integration testing using appropriate methodologies and techniques.
  • Analyze requirements and specifications and create detailed designs for implementation.
  • Analyze and resolve software-related issues originating from internal or external customers.
  • Continuously update professional knowledge of new technologies as they are selected and integrated into the Cotiviti, Inc. product suite.
  • Review the software engineering approach of proposed solutions to ensure adherence to best practices.
  • Complete all responsibilities as outlined in the annual Performance Plan.
  • Must be able to perform duties with or without reasonable accommodation.
  • Work as a team member in the creation and maintenance of ETL scripts, tools, queries, and applications used for healthcare data management, data validation, statistical report generation, and program validation.
  • Develop Spark jobs to identify potential overpayment opportunities using PySpark SQL in an HDFS distributed environment (see the sketch after this list).
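As a rough illustration of that last responsibility, here is a minimal PySpark sketch that screens claims for potential overpayments. The HDFS paths, the claims table, its columns (claim_id, procedure_code, billed_amount), and the 3x threshold are all hypothetical assumptions for illustration, not Cotiviti's actual data model or detection logic.

    from pyspark.sql import SparkSession

    # Minimal sketch: flag claims billed well above the average for their
    # procedure code. Paths, table, and column names are hypothetical.
    spark = (SparkSession.builder
             .appName("overpayment-screen")
             .getOrCreate())

    # Claims are assumed to live as Parquet files on HDFS.
    claims = spark.read.parquet("hdfs:///data/claims/")  # hypothetical path
    claims.createOrReplaceTempView("claims")

    # Spark SQL: compare each claim to its procedure-level average amount.
    flagged = spark.sql("""
        SELECT c.claim_id,
               c.procedure_code,
               c.billed_amount,
               s.mean_amount
        FROM claims c
        JOIN (SELECT procedure_code,
                     AVG(billed_amount) AS mean_amount
              FROM claims
              GROUP BY procedure_code) s
          ON c.procedure_code = s.procedure_code
        WHERE c.billed_amount > 3 * s.mean_amount
    """)

    # Write candidates back to HDFS for downstream review.
    flagged.write.mode("overwrite").parquet("hdfs:///data/overpayment_candidates/")
    spark.stop()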

Qualifications

  • Strong working knowledge of ETL, database technologies, big data, and data processing.
  • 2+ years of experience developing applications using Hadoop, Spark, Impala, Hive, and Python.
  • 2+ years of experience running, using, and troubleshooting ETL in the Cloudera Hadoop ecosystem, e.g., Hadoop FS, Hive, Impala, Spark, Kafka, Hue, Oozie, YARN, Sqoop, Flume (see the sketch after this list).
  • Healthcare claim data knowledge is highly preferred.
  • Experience processing large amounts of structured and unstructured data with Spark.
  • Experience developing data extraction applications with SQL and relational databases.
  • Experience with data movement and transformation technologies.
  • Experience with Python development.
  • Good understanding of the end-to-end (E2E) process of the application.
  • Good understanding of, or exposure to, API development.
  • Java experience with the Spring framework preferred.
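To make the Cloudera/Hive ETL bullet concrete, here is a short hedged sketch of a data-validation step that reads a Hive-managed table with Spark; the database, table, and column names are assumptions for illustration only.

    from pyspark.sql import SparkSession, functions as F

    # Sketch of a validation step in a Hive-backed ETL flow.
    # Database, table, and column names are hypothetical.
    spark = (SparkSession.builder
             .appName("claims-validation")
             .enableHiveSupport()   # requires a Hive metastore, as on Cloudera
             .getOrCreate())

    raw = spark.table("staging.member_claims")  # hypothetical Hive table

    # Basic quality checks: key column present, amount non-negative.
    is_valid = F.col("claim_id").isNotNull() & (F.col("billed_amount") >= 0)
    bad_count = raw.filter(~is_valid).count()
    print(f"rows failing validation: {bad_count}")

    # Persist only the clean rows for downstream reporting.
    raw.filter(is_valid).write.mode("overwrite").saveAsTable("curated.member_claims")
    spark.stop()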

Mental Requirements:

  • Communicating with others to exchange information.
  • Assessing the accuracy, neatness, and thoroughness of the work assigned.

Physical Requirements and Working Conditions:

  • Remaining in a stationary position, often standing or sitting for prolonged periods.
  • Repeating motions that may include the wrists, hands, and/or fingers.
  • Must be able to provide a dedicated, secure work area.
  • Must be able to provide high-speed internet access/connectivity and office setup and maintenance.
  • No adverse environmental conditions expected.

Base compensation ranges from $75,000.00 to $105,000.00. Specific offers are determined by various factors, such as experience, education,... For full info follow application link.

Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities
