Hothead Games is a 200-person studio that makes free-to-play mobile games, with offices in Vancouver, British Columbia, and Halifax, Nova Scotia. As one of the world's top publishers in the first-person shooter and sports genres, we move fast, do amazing work and have a lot of fun doing it.
We have a data-driven culture at Hothead and support this with a strategic focus on building world-class analytics tools and technology. We are seeking individuals who are passionate about data analytics and data engineering. In this role, you will be instrumental in developing and maintaining the core big-data ETL pipeline. You will also have the opportunity to work with data analytics tools, machine learning frameworks, systems for running A/B tests and other experiments, and dynamic sales offer systems.
As the ideal candidate for the position, you have solid programming skills and experience working with large data sets, and you are passionate about developing your skills in the areas of data engineering and data analytics.
You like to take responsibility and enjoy building solutions that are maintainable, robust and scalable. You enjoy working collaboratively with customers and team members to come up with innovative and effective solutions.
Duties & Responsibilities:
- Design and develop Hadoop ETL pipeline code in Python that transforms data from live games.
- Provide support to game engineers and work closely with them to promote best practices for game instrumentation.
- Validate data warehouse table schemas and verify that production data is transformed as expected.
- Research and develop pipelines and data tools for internal customers (marketing, finance, and game development teams).
- Integrate data from multiple third party data sources using REST APIs and other mechanisms.
- Provide support to data scientists and data analysts.
- Develop and maintain unit tests, associated test data and automation utilities that ensure the pipeline is robust.
- Deploy updated pipeline code to AWS. Monitor and troubleshoot failures with the assistance of senior engineers.
- Assess and develop automation steps for deployment of pipeline updates to AWS.
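To give a concrete flavour of the ETL work described above, here is a minimal, hypothetical sketch of a pipeline transform step: normalizing raw game events into flat, warehouse-ready rows. All field names (`event_type`, `player_id`, `ts`, `session_id`) are illustrative assumptions, not Hothead's actual event schema.

```python
from datetime import datetime, timezone


def transform_events(raw_events):
    """Flatten raw game events into warehouse-ready rows, skipping malformed ones."""
    rows = []
    for event in raw_events:
        # Require the minimal fields a warehouse row needs; drop anything else.
        if not all(key in event for key in ("event_type", "player_id", "ts")):
            continue
        rows.append({
            "event_type": event["event_type"],
            "player_id": event["player_id"],
            # Normalize epoch-seconds timestamps to ISO-8601 UTC strings.
            "event_time": datetime.fromtimestamp(
                event["ts"], tz=timezone.utc
            ).isoformat(),
            # Optional fields default to None so every row has the same shape.
            "session_id": event.get("session_id"),
        })
    return rows


raw = [
    {"event_type": "level_complete", "player_id": "p1",
     "ts": 1700000000, "session_id": "s1"},
    {"event_type": "purchase"},  # malformed: missing player_id and ts, so dropped
]
rows = transform_events(raw)
```

In a production pipeline this kind of pure transform function is easy to cover with the unit tests mentioned in the duties above, since it takes plain data in and returns plain data out.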
Qualifications:
- A bachelor's degree in computer science, computer systems engineering, software engineering or mathematics.
- Proficiency in at least one programming language, ideally Python, Java, or Scala.
- Proficiency in SQL.
- Working knowledge of modern Big Data frameworks and tools for large-scale, distributed data processing, data warehousing and data analysis.
- Strong critical-thinking and problem-solving skills.
- Experience working with large data sets and multiple types of databases.
- Familiarity with map-reduce paradigms and related systems like Hadoop and Spark.
- Familiarity with Amazon Web Services cloud computing platform.
- Familiarity with DevOps practices such as continuous integration and test-driven development.
- Ability to communicate complicated concepts clearly and concisely.
- Experience with NoSQL databases such as DynamoDB, Redis and CouchDB, and columnar store databases like Redshift and Vertica.
- Experience with container technology such as Docker or Kubernetes.
- Experience with streaming event systems like Kinesis or Kafka.
- Knowledge of advanced data science concepts such as predictive analytics, machine learning and data mining.
- Familiarity with test automation systems and continuous integration systems (e.g. Jenkins).
- Familiarity with free-to-play game design concepts and industry-standard KPIs used to measure game performance.
This is a permanent, full-time position in our Vancouver office, located at 1555 West Pender Street, Vancouver, BC, V6C 2T1.
We offer a great compensation package that includes salary, vacation, and extended health, dental and vision care benefits. Other nice-to-haves include an in-house gym, RRSP matching, weekly lunch and learns, major and minor league sports events, summer BBQs, snacks, beer and amazing people to work with!