The role contributes to a variety of exciting projects, ranging from designing robust, automated data pipelines and storage processes to building tools that improve company-wide productivity with data. The focus is on designing, implementing, and operating stable, scalable, and efficient solutions that flow data from different sources into the data lake and other databases. You will work with stakeholders to bring data into a standard, queryable format and empower the company to make data-driven decisions.
Responsibilities
Designing, implementing, and maintaining scalable data pipelines
Proposing new data architectures for new requirements and fine-tuning existing ones
Working with the data science team, as well as other teams, to meet their requirements
Monitoring data services and resolving incidents as they occur
Requirements
BS/MS or higher in computer engineering/science, or equivalent experience
At least 2 years of experience working with Python, Java, or Scala
Hands-on experience in Linux, Virtualization, Docker, and Kubernetes
Specialization in the Hadoop ecosystem (HDFS, YARN, Hive)
Hands-on experience with Kafka
Familiarity with monitoring systems (Grafana, Prometheus, exporters)
Experience working with Logstash, ClickHouse, and MySQL
SQL knowledge
Experience with data exploration and visualization tools such as Hue and Superset
Preferred Qualifications
Experience with Agile / Scrum / DevOps projects
Experience with streaming technologies such as Spark, Apache Flink, or NiFi
Experience working with one or more of these: Airflow, Debezium, Confluent Schema Registry
About the Company
Snapp is an Iranian product powered by a creative, young, and educated team, working to connect the information and communications technology industry with the everyday life of society.
We have big goals and high ambitions. We intend to make Snapp the best solution for intra-city travel in Iran, and to get there we need creative, hardworking, and ambitious people. If you have these qualities, we would be happy to receive your resume.