Data Engineer job in Fitzroy
Join a company you can be proud to work for, doing work you can be proud of – from building out data pipelines to diving into cutting-edge projects.
IN A NUTSHELL...
At Bellroy, we believe in balance. Art with science. Design with data. Intuition with technology. The work our data team does is just as vital to supporting our business as any other. As a Bellroy Data Engineer, you’ll be responsible for creating and managing an effective data pipeline architecture, and running that alongside more cutting-edge projects we’d love to kick-start when you get here.
Working with our senior analyst, our developers and our sysadmins, you’ll put into place the infrastructure required to extract, transform and load data from a variety of sources. And you’ll use your sharp logic to improve internal processes and automate systems, optimise delivery and enhance outcomes. Once that’s all sorted, you’ll look ahead to longer-term optimisation of our systems and processes, as well as some machine learning initiatives and other data team projects. All while, on a daily basis, supporting ad hoc analyses and data requests from all over the business.
That was all a bit of a mouthful, but if you read that with understanding and enthusiasm, this could be the job for you. If you bring your industry experience and detail-oriented brain to help us, we’ll offer a world-class team to learn from, the tools you need to do your thing, and the support you need to flourish.
YOU COULD BE THE ONE, IF...
Your logic and reasoning are formidable, as is your determination. You are ordered and decisive, yet imaginative at the same time. You like things to end up fitting neatly in the boxes, but you don’t mind looking outside of them in the first place to find the answers. You get a kick out of getting things right, and you have the patience to do that every time. You present your work with pride, knowing you’ve done all you can to make sure it’s correct – and useful.
Disorder makes you uncomfortable. With symmetry, organisation and a clean database schema, life is good. But you love a challenge more than most other things, so where you see inconsistency or confusion, you also see an opportunity. And you’ll follow that thread until you’re confident you’ve sorted it out.
You might have been called a “bookworm” more than once in your life, because your curiosity has you forever seeking (and absorbing) information. And nowadays, this natural thirst for knowledge has you constantly looking at ways to improve, optimise and enhance – yourself, your processes, and your data. A bit like a mechanic will with a car, you’ll break apart the whole into pieces you can check and analyse, before rebuilding an idea, process or system into something better.
Sound familiar? If so, we’d love to meet you (and that brain of yours).
IF YOU WERE HERE LAST WEEK YOU MIGHT HAVE:
- Reviewed our overall data systems (supported by very competent sysadmins) to make sure everything was in order and to look for larger scale improvements.
- Built out a handful of new pipelines to bring more of our core business data into our data warehouse.
- Chased a handful of data validation alerts raised by our pipelines, and taken the time to get to the root cause of each of them, then either delegated the fix to an appropriate someone else or fixed them yourself.
- Suggested an improvement to our A/B testing infrastructure.
- Worked outside of the data team, with our developers, flexing your database and query optimisation skills to decide whether to fix a performance issue they’re having at the database level, or insist that the fix belongs in the code (and that’s fun – they’re an excellent bunch).
- Provided an ad hoc analysis (working with our analyst) to someone who requested it, integrating a one-off data source.
- Talked with our CIO about some of our mid-term plans, and how we’ll support them with data.
THERE ARE SOME QUALITIES YOU MUST POSSESS...
- At least three years’ experience in data-related roles.
- Advanced working knowledge of SQL and experience in ETL using a workflow management tool such as Airflow.
- Experience with building and optimising data pipelines.
- Experience with collecting data from a variety of sources including APIs (good APIs, bad APIs, and ugly APIs).
- Strong analytical skills and an ability to perform root cause analysis.
- Training in Computer Science, Statistics, Informatics, Information Systems or another relevant quantitative field (or demonstrable skill in one of those areas and the story of how you built that skill without a degree).
- Very high precision – you need to know how to verify that your work is correct.
- Bonus points for more experience with relevant programming languages (e.g. Python, R, Scala), project management and machine learning.
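To give a concrete (if toy) sense of the extract–transform–load work at the heart of this role, here is a minimal sketch in Python using the standard-library sqlite3 module. Every table, column and value below is invented for illustration; it is not Bellroy’s actual schema or tooling.

```python
import sqlite3

# Hypothetical source system: raw order events with amounts in cents.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, country TEXT)")
src.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, 2500, "au"), (2, 1900, "AU"), (3, 4200, "us")])

# Extract: pull the raw rows out of the source.
rows = src.execute("SELECT id, amount_cents, country FROM raw_orders").fetchall()

# Transform: convert cents to dollars and normalise country codes.
cleaned = [(oid, cents / 100.0, country.upper()) for oid, cents, country in rows]

# Load: write the cleaned rows into a (hypothetical) warehouse table.
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, country TEXT)")
wh.executemany("INSERT INTO orders VALUES (?, ?, ?)", cleaned)

total = wh.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 86.0
```

In practice these steps would be orchestrated by a workflow manager such as Airflow, with each stage as a scheduled, retryable task – but the extract/transform/load shape is the same.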
LOCATION AND HOURS
This is a full-time role based in our Fitzroy office.
Have a good look around our website, read our story and get a feel for who we are. If you think you’d be a good fit, apply via this link: http://grnh.se/4cavj81 (please copy and paste URL into your web browser) or the careers page on bellroy.com with a cover letter, resume and answers to the question below.
(Please spend no more than 30 minutes on this question – we’re looking for an idea of how you’d approach questions like this, how you think, and the sorts of things you’ll notice. We do want to respect your time, and we don’t want you to spend hours on this.)
For the question below see the sample db at https://github.com/tricycle/data_engineer_application, and the README file in that repo.
- Given the 4 tables in the SQLite3 db bellroy_question_1.sqlite3 in the repo (invoices, invoice_lines, products and orders) please write a SQL SELECT statement to prepare a dataset for a sales report returning columns month, style_code, color_name, and revenue. Please also comment freely on our schema and anything else you noticed. (We mention this because we suspect that if you're the candidate we're looking for, you're at least a little horrified by something in this database.) More details in the repo.
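For applicants unsure what shape of answer we mean, here is a rough sketch of that kind of monthly sales rollup, built on tiny invented tables. The real schema lives in the repo, so every column name and value below is an assumption, not the answer – the point is only the join-then-aggregate shape.

```python
import sqlite3

# Toy tables loosely echoing those named in the question; all columns here
# are hypothetical stand-ins, not the schema from the repo.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, style_code TEXT, color_name TEXT);
CREATE TABLE invoices (id INTEGER PRIMARY KEY, invoiced_at TEXT);
CREATE TABLE invoice_lines (
  invoice_id INTEGER, product_id INTEGER, quantity INTEGER, unit_price REAL
);
INSERT INTO products VALUES (1, 'WNG', 'Java'), (2, 'HIP', 'Slate');
INSERT INTO invoices VALUES (10, '2017-03-04'), (11, '2017-03-20'), (12, '2017-04-02');
INSERT INTO invoice_lines VALUES (10, 1, 2, 79.0), (11, 2, 1, 99.0), (12, 1, 1, 79.0);
""")

# General shape of a monthly sales report: join lines to invoices (for the
# month) and to products (for style/colour), then aggregate revenue.
query = """
SELECT strftime('%Y-%m', i.invoiced_at) AS month,
       p.style_code,
       p.color_name,
       SUM(il.quantity * il.unit_price) AS revenue
FROM invoice_lines il
JOIN invoices i ON i.id = il.invoice_id
JOIN products p ON p.id = il.product_id
GROUP BY month, p.style_code, p.color_name
ORDER BY month, p.style_code
"""
for row in db.execute(query):
    print(row)
```

Your actual answer should, of course, target the columns and quirks of the database in the repo – including whatever horrified you about it.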