Senior DevOps Engineer (Helsinki)
Description
HelloWork builds next-generation Talent Operations practices and helps companies build teams. We're looking for a Senior DevOps Engineer to join one of our client teams in Helsinki.
When you apply to one of our positions, we'll invite you to a meet & greet with one of our experts to discuss in more detail how well the role fits you. After that conversation, we'll schedule a discussion with the hiring party. No strings attached.
You'll be building and maintaining the infrastructure that keeps applications running smoothly and data flowing reliably. This means setting up automated deployments, monitoring systems that catch problems before users notice them, and creating infrastructure that scales with business needs. You'll work with both application infrastructure and data systems, making sure everything from web apps to data pipelines runs without hiccups.
What you'll do
• Build and maintain CI/CD pipelines that automatically test, build, and deploy applications safely to production
• Design and manage cloud infrastructure using Infrastructure as Code (Terraform, CloudFormation, or similar)
• Set up monitoring, logging, and alerting systems that help teams catch and fix issues quickly
• Manage containerized applications using Docker and Kubernetes, ensuring they scale automatically based on demand
• Work with data infrastructure – setting up databases, data pipelines, and ensuring data systems are reliable and performant
• Automate repetitive tasks and processes to reduce manual work and human error
• Collaborate with development teams to improve deployment processes and system reliability
• Implement security best practices across infrastructure and data systems
What we're looking for
• 5+ years of experience in DevOps, infrastructure engineering, or system administration
• Strong experience with cloud platforms (AWS, Azure, or GCP) and their core services
• Knowledge of Infrastructure as Code tools
• Experience with containerization and orchestration
• Understanding of CI/CD pipelines and tools
• Experience with monitoring and logging tools (Prometheus or similar)
• Understanding of data pipeline tools and concepts (Apache Airflow, data warehousing basics)
Nice to have
• Experience with data engineering tools (Apache Kafka, Spark, or data lake technologies)
• Knowledge of database performance tuning and data modeling
• Familiarity with data warehouse solutions (Snowflake, Redshift, BigQuery)
• Understanding of security scanning and compliance automation
• Knowledge of cost optimization strategies for cloud infrastructure
• Experience with multi-cloud or hybrid cloud environments
See yourself here? We'd love to hear from you.