Work Experience
DevOps Engineer (Penki Continental - Software for ATMs)
Duration: 04-2024 - present
Summary:
- Developed and maintained a hybrid cloud infrastructure supporting critical banking services
- Focused on ensuring system reliability, scalability, and security across multiple environments
- Penki Continental is a Lithuanian IT company providing software solutions for banking and financial sectors
Responsibilities:
- Designed and deployed scalable Kubernetes clusters on GKE, ensuring high availability and optimized resource utilization for .NET microservices.
- Automated CI/CD pipelines with GitLab CI and Bamboo, reducing deployment time by 60% and improving release reliability.
- Migrated legacy applications to containerized environments on Kubernetes, modernizing .NET workloads and improving maintainability.
- Implemented hybrid infrastructure solutions by integrating VMware and Hyper-V environments with GCP, optimizing cost and performance.
- Established monitoring and alerting for Kubernetes workloads, enabling proactive issue resolution and reducing downtime.
- Developed Infrastructure as Code templates for consistent provisioning of cloud and on-premises resources across GCP and virtualization platforms.
Technologies: GCP, GKE, Kubernetes, VMware, Hyper-V, .NET, Java 21, GitLab CI, Bamboo, Prometheus, Grafana, Zabbix, Docker, PowerShell, Bash
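The CI/CD automation described above can be sketched as a minimal GitLab CI pipeline; the registry, cluster, region, and deployment names here are hypothetical placeholders, not the actual project configuration:

```yaml
# Illustrative GitLab CI pipeline for a .NET microservice deployed to GKE.
# All names (cluster, region, deployment) are hypothetical.
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy-gke:
  stage: deploy
  image: google/cloud-sdk:slim
  script:
    - gcloud container clusters get-credentials demo-cluster --region europe-north1
    - kubectl set image deployment/payments-api api="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: production
```

`CI_REGISTRY_IMAGE` and `CI_COMMIT_SHORT_SHA` are standard GitLab predefined variables; tagging images by commit SHA is what makes rollbacks and release auditing straightforward.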
DevOps Engineer (Startup company in the private business jets domain)
Duration: 04-2023 - 05-2024
Summary:
- Designed and maintained a highly available, scalable, and secure cloud infrastructure across AWS and Vultr
- Focused on automation, performance optimization, and seamless integration of monitoring and CI/CD processes for a startup in the private business jets domain
Responsibilities:
- Managed AWS and Vultr infrastructures, ensuring high availability and scalability.
- Developed and maintained reusable Terraform modules for infrastructure provisioning, enhancing automation and efficiency.
- Configured robust monitoring systems, facilitating early issue detection and proactive response.
- Optimized CI/CD pipelines in Jenkins, streamlining software delivery processes.
- Designed fault-tolerant systems, implementing failover strategies for uninterrupted application availability.
- Collaborated with international development and QA teams, contributing to architecture discussions and ensuring smooth deployments.
- Maintained and fine-tuned CentOS, Rocky Linux, and Red Hat Enterprise Linux servers, ensuring optimal performance and security.
- Developed automation scripts in Bash, improving operational workflows.
- Implemented and managed SaltStack for efficient server management.
- Oversaw maintenance and performance of database servers.
- Utilized Docker for containerization, enhancing application deployment and management.
Technologies: AWS, Vultr, Terraform, Jenkins, SaltStack, Apache, Nginx, HAProxy, Linux, Bash, Prometheus, Grafana, ELK, Java, Tomcat, Maven, Groovy, Docker, Docker Compose, Git, PostgreSQL, MongoDB, Redis
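The Terraform provisioning work above might look roughly like the following module composition; the module paths, CIDRs, and instance sizes are illustrative assumptions, not the real infrastructure:

```hcl
# Illustrative Terraform root configuration composing reusable modules.
# Module sources, CIDR ranges, and sizes are hypothetical.
module "app_vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.20.0.0/16"
  azs        = ["eu-central-1a", "eu-central-1b"]

  tags = {
    environment = "production"
  }
}

module "web_asg" {
  source        = "./modules/asg"
  subnet_ids    = module.app_vpc.private_subnet_ids
  instance_type = "t3.medium"
  min_size      = 2
  max_size      = 6
}
```

Splitting infrastructure into small modules with explicit outputs (like `private_subnet_ids`) is what makes the same topology repeatable across environments.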
Team Lead of 7 DevOps Engineers (AWS Cloud Infrastructure Design and Automation)
Duration: 05-2022 - 04-2023
Summary:
- Led a DevOps team responsible for designing, automating, and maintaining cloud infrastructure on AWS
- Focused on improving scalability, observability, and team performance through modern DevOps practices and infrastructure automation
Responsibilities:
- Coordinated the work of a 7-member DevOps team, managing priorities and task distribution.
- Conducted technical interviews and onboarding for new engineers.
- Developed personal growth plans and provided mentorship to team members.
- Designed and deployed AWS infrastructure using Terraform and EKS.
- Created infrastructure diagrams and maintained up-to-date architecture documentation.
- Built CI/CD pipelines in Bitbucket CI, ensuring automated, reliable deployments.
- Set up monitoring and logging with Prometheus, Grafana, and Loki for proactive issue detection.
- Implemented notification and alerting systems integrated with CloudWatch and SNS.
- Collaborated closely with Development and QA teams to ensure smooth release cycles.
- Delivered presentations and knowledge-sharing sessions as a meetup speaker.
Technologies: AWS (EKS, DocumentDB, ElastiCache, SNS, CloudWatch, ALB, Route 53, ACM), Terraform, Bitbucket CI, Prometheus, Grafana, Loki
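The proactive alerting described above can be sketched as a Prometheus alerting rule of this shape; the metric names and thresholds are hypothetical examples, not the production rules:

```yaml
# Illustrative Prometheus alerting rule; metric names and thresholds
# are hypothetical.
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
```

The `for: 10m` clause suppresses flapping: the condition must hold continuously before the alert fires and is routed onward (e.g. to a notification channel).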
DevOps Engineer (Cloud Infrastructure Modernization for International Electrical Engineering Company)
Duration: 01-2021 - 05-2022
Summary:
- Modernized cloud infrastructure for an international electrical engineering company
- Focused on automation, migration to AWS, and optimizing CI/CD workflows to enhance system reliability and operational efficiency
Responsibilities:
- Created and enhanced Terraform components for scalable infrastructure deployment.
- Provisioned and configured servers using Ansible playbooks, improving deployment consistency.
- Troubleshot and maintained existing cloud and on-premise infrastructure, ensuring high availability.
- Supported and extended CI/CD pipelines in Azure DevOps to streamline software delivery.
- Collaborated with Development and QA teams to ensure reliable integration and smooth releases.
- Led migration of applications from on-premise environments to AWS, improving scalability and resilience.
- Implemented monitoring and alerting solutions with Prometheus and Grafana for proactive issue detection.
Technologies: AWS, Terraform, Ansible, Bash, Linux, MySQL, PostgreSQL, MongoDB, Java, Azure DevOps, Prometheus, Grafana
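Server provisioning with Ansible, as described above, typically takes a form like this minimal playbook; the host group, packages, and service names are illustrative assumptions:

```yaml
# Illustrative Ansible playbook for baseline server provisioning.
# Host group, package list, and service names are hypothetical.
- name: Provision application servers
  hosts: app_servers
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.yum:
        name: [java-17-openjdk, chrony]
        state: present

    - name: Ensure the node exporter service is running
      ansible.builtin.service:
        name: node_exporter
        state: started
        enabled: true
```

Because every task is declarative and idempotent, rerunning the playbook converges all hosts to the same state, which is what yields the deployment consistency mentioned above.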
DevOps Engineer (E-commerce Infrastructure Modernization and Automation)
Duration: 09-2020 - 12-2020
Summary:
- Contributed to the modernization and automation of e-commerce infrastructure for a software company providing Magento- and WordPress-based solutions
- Focused on improving deployment workflows, monitoring, and migration to cloud infrastructure
Responsibilities:
- Supported, upgraded, and deployed web applications based on Magento and other platforms.
- Designed and implemented CI/CD pipelines in Jenkins to automate build and deployment processes.
- Migrated infrastructure from on-premise machines to AWS, enhancing scalability and reliability.
- Managed and optimized virtualized environments using Proxmox.
- Configured and maintained monitoring systems with Zabbix and New Relic for real-time performance tracking.
- Conducted log analysis and troubleshooting to ensure high application availability and fast issue resolution.
- Implemented load balancing with HAProxy to improve system performance and fault tolerance.
Technologies: Jenkins, PHP, Proxmox, Zabbix, New Relic, AWS, Ansible, HAProxy, Magento
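The HAProxy load-balancing setup mentioned above can be sketched with a fragment of this shape; the addresses, certificate path, and health-check endpoint are hypothetical:

```
# Illustrative HAProxy fragment balancing Magento web nodes.
# Addresses, certificate path, and check URL are hypothetical.
frontend web_front
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend magento_nodes

backend magento_nodes
    balance roundrobin
    option httpchk GET /health_check.php
    server web1 10.0.1.11:8080 check
    server web2 10.0.1.12:8080 check
```

The `check` keyword plus `option httpchk` is what provides the fault tolerance: a node failing its health check is removed from rotation until it recovers.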
DevOps Engineer (Cloud and On-premise Infrastructure Automation for Software Outsourcing Company)
Duration: 10-2019 - 09-2020
Summary:
- Worked on cloud and on-premise infrastructure automation for an international software outsourcing company
- Focused on improving operational efficiency, system reliability, and monitoring practices across multiple environments
Responsibilities:
- Automated routine infrastructure processes to improve operational efficiency.
- Maintained and administered Linux-based servers, ensuring system stability and performance.
- Deployed and upgraded applications, performing log analysis and troubleshooting issues.
- Configured and maintained Zabbix monitoring for proactive system alerting.
- Collaborated with development teams to support continuous integration and deployment pipelines using Jenkins and TeamCity.
- Managed virtualized environments with Vagrant to standardize development and testing setups.
Technologies: AWS, PHP, Java, Jenkins, Bash, Linux, TeamCity, Vagrant, Zabbix
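Routine-automation scripts of the kind described above often center on small reusable helpers; this is a minimal sketch of a retry wrapper (the function name and usage are hypothetical, not taken from the actual codebase):

```shell
#!/usr/bin/env bash
# Illustrative retry helper for flaky operations (deploys, health checks).
# Usage: retry <attempts> <command...>  -- names and usage are hypothetical.
retry() {
  local attempts=$1
  shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "command failed after $attempts attempts: $*" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 1
  done
}
```

Wrapping unreliable steps in a bounded retry keeps one-off network hiccups from failing an otherwise healthy automation run.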
DevOps / Big Data Engineer (Cloud-based Data Processing and ETL Infrastructure for Big Data Integrator)
Duration: 01-2019 - 06-2019
Summary:
- Worked on cloud-based data processing and ETL infrastructure for a big data integrator company
- Focused on automating, maintaining, and optimizing production systems to ensure high availability, performance, and scalability
Responsibilities:
- Deployed, automated, and managed cloud-based production systems to ensure reliability and security.
- Conducted system troubleshooting and problem-solving across platform and application domains.
- Created a data lake by extracting customer data from various sources (Teradata, CSV) into HDFS.
- Used Apache Hive to run SQL-like queries, executed as MapReduce jobs, over HDFS datasets for efficient data processing.
- Employed Apache Sqoop to transfer data between relational sources and cloud storage environments.
- Developed automation scripts in Bash to streamline ETL and data management processes.
Technologies: Hadoop, HDFS, MapReduce, Bash, Hive, Spark, Cloudera, Java, Sqoop, Oracle DB
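The data-lake step above, exposing raw CSV extracts landed in HDFS to Hive, can be sketched like this; the database, table, columns, and path are hypothetical examples:

```sql
-- Illustrative HiveQL exposing CSV extracts in HDFS as a queryable table.
-- Database, columns, and HDFS path are hypothetical.
CREATE EXTERNAL TABLE IF NOT EXISTS lake.customers (
  customer_id BIGINT,
  country     STRING,
  created_at  TIMESTAMP
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/raw/customers';
```

Using an EXTERNAL table means Hive reads the files in place without taking ownership of them, so dropping the table never deletes the raw extracts.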
Education