The Senior Big Data Engineer is responsible for the design and implementation of core technologies behind our organization's Big Data analysis and analytics technical infrastructure. As a core member of a high-performance team, you will ensure that Big Data pipelines are consistently and reliably maintained and that analytics capabilities are delivered at an optimum level, helping the organization draw insights from a large number of diverse datasets.
Responsibilities and Tasks
Enables Big Data batch and real-time analytical processing solutions leveraging emerging technologies, and builds and architects next-generation Big Data analytics frameworks:
● Translates complex functional and technical requirements of data science team use cases into detailed architecture, design, and high-performance data pipelines, and ensures the analytics infrastructure and associated systems meet business requirements and industry best practices.
● Builds automation tools and ensures all automated processes preserve data integrity by keeping data availability and integration processes aligned.
● Performs technology and product research to better define requirements, resolve important issues and improve the overall capability of the big data technology stack.
● Expands and grows data platform capabilities to solve new data problems and challenges.
● Supports standardization of documentation and the adoption of standards and practices related to data and applications.
● Works both independently and in collaboration with our data integration developers and data scientists to design and build high-performance algorithms, prototypes, predictive models, and proofs of concept.
What You Need to Be Successful:
● 3+ years of experience designing, implementing, and supporting systems in a large-scale analytics or distributed data engineering environment comprising disparate application systems and multiple data sources.
● At least 3 years of professional software development experience in an applicable programming language (Python, Scala, Java, or equivalent).
● Solid familiarity with common big data technologies, stacks, and frameworks (such as Hadoop, Kafka, Spark, Kylin, the SMACK stack, Kudu, RAPIDS).
● Specifically, 3+ years of experience working with a big data ingestion layer fed by live services, including continuous ETL/data pipeline processing, storage, and governance.
● Bachelor's or Master's degree in Computer Science, Mathematics, Information Systems, Computer Engineering, or a closely related field, or equivalent related professional experience.
It would be great if you have the following qualifications:
● Familiarity with Azure's analytics stack - Data Lake, Data Explorer/Kusto, Storage, Data Factory, Synapse, Databricks, HDInsight (with Spark, Hive, Hadoop, etc.)
● Knowledge of DevOps culture, practices, and tools
● Familiarity with open-source container orchestration platforms, e.g. Kubernetes
We are an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to gender, age, race, disability, or any other characteristic protected by law.
About the Company
We are the product of an idea that took shape in 1395 (2016): "Bimito", which a few years later merged with "Azki" and continued on under the name Azki.com. Our story began in the world of online insurance comparison and purchase, and the insurance industry itself was launched by sailors many years ago. At Azki we are like sailors who each own our own little boats and together pull a bigger ship along with us. That big, giant ship is Azki!
Our job is to help people make the best decision by comparing.
Together we are driving a business forward, and here we have the opportunity to put our ideas into practice, explore Azki with curiosity, and use our expertise to improve, or rebuild from scratch, pieces of this big ship and of our own boats.
We want to build something that makes our users say:
"What a cool, awesome thing!"