.01

ABOUT

PERSONAL DETAILS
186A Rivervale Drive, Singapore 541186
(+65)8788-9709
Hi, I have worked as a Linux systems administrator and DevOps engineer. I am passionate about open source, Linux-based technologies, cloud computing, and especially DevOps automation. Welcome to my personal resume website.

BIO

ABOUT ME

Linux geek and practitioner with more than 10 years of professional experience running high-traffic sites with more than 200k daily active users (up to 40k hourly active users). I also have expertise in building Linux-based systems that are highly available, cost-efficient, fault-tolerant, and scalable.

Proven track record of starting DevOps initiatives and implementing DevOps tooling such as Jenkins, Puppet, and Ansible at two of my previous companies. Nowadays I automate pretty much everything: server provisioning, builds, application deployment to production and even to the disaster recovery site, and service installation/configuration, using Jenkins, Puppet, and Ansible.

Strong expertise and hands-on experience with Red Hat/CentOS Linux. That includes, but is not limited to, configuration and tuning of Linux-based services such as Apache, Nginx, MySQL/Percona, PostgreSQL, and memcached, as well as operating system tuning and hardening. Additionally, I have a few years of working experience in MySQL database administration, where I developed technical competency in query optimization and wrote several queries for reporting as well.

My personal mottos are “passion” and “lifelong learning”. I have a deep passion for the latest technologies, and I strongly believe that to stay on top of an IT industry that keeps evolving, I have to keep learning the latest trends out there. That is also why I am highly enthusiastic about learning, especially when it comes to automation, containers, and cloud-based technologies.

.02

RESUME

I may know a lot, but my experience shows the best of me!

TECHNICAL COMPETENCE

1. Deep knowledge of and hands-on proficiency in Linux operating systems (CentOS, Red Hat, etc.)

2. Extensive skills and knowledge in designing, installing, and configuring infrastructure to be highly available and to support millions of active users

3. Experienced in analyzing, troubleshooting, and providing solutions for all technical issues

4. Experienced in administering different types of Linux-based file systems: NFS, Samba, LVM, GlusterFS, DRBD

5. Experienced in server and hardware builds, including CPU, memory, and RAID configuration

6. Working knowledge of database tuning and performance, such as indexing, analyzing slow queries, and optimizing those queries

7. Familiar with Linux-based clustering services such as Pacemaker and RHCS (Red Hat Cluster Suite)

8. Strong understanding of the rest of the infrastructure stack: network, database, virtualization

9. Strong background in automation and deployment using shell scripting, Jenkins, and Puppet

10. Good working knowledge of storage infrastructure, including SAN, NAS, iSCSI, and zoning configuration on SAN switches

11. Deep understanding of enterprise network design and setup, including load balancer, router, firewall, and switch configuration

12. Deep understanding of and working experience with cloud-based infrastructure such as AWS, Rackspace, and DigitalOcean

13. Able to work with minimal supervision, making decisions based on priorities and schedules; team- and target-oriented; able to understand business initiatives and work independently

14. Strong analytical and reasoning abilities; a mindset for solving complex problems in a fast-paced environment while delivering on service promises; able to develop and adapt business processes after evaluating multiple solutions

PROFESSIONAL CERTIFICATION
  • 8/2020
    Singapore

    Certified Kubernetes Administrator (CKA)

    Linux Foundation

    Certification ID #LF-ffdpxvxg4o

  • 8/2020
    Singapore

    Certified Kubernetes Application Developer (CKAD)

    Linux Foundation

    Certification ID #LF-nozii7siw9

  • 12/2013
    Jakarta, Indonesia

    Red Hat Certified Engineer (RHCE)

    Red Hat

    Certification ID #130-208-671

  • 11/2013
    Jakarta, Indonesia

    Red Hat Certified System Administrator (RHCSA)

    Red Hat

    Certification ID #130-208-671

  • Jakarta, Indonesia

    AWS Certified Solutions Architect - Associate Level

    Amazon Web Services

    Certification ID #AWS-ASA-5891

EDUCATION
  • 01/2001
    01/2007
    JAKARTA, INDONESIA

    ELECTRICAL ENGINEERING

    TARUMANAGARA UNIVERSITY

    Focusing on the subdiscipline: "Computer Systems Engineering"

    GPA: 2.80

JOBS AND EXPERIENCE
  • 05/2018
    Present
    SINGAPORE

    DEVOPS LEAD

    Incube8 Pte Ltd

    * Continuously improve the efficiency of engineering and IT by automating manual tasks and researching the latest tools

    * On-call support for incident response and incident management

    * Oversee and actively participate in the planning and execution of application migrations to the AWS Cloud

    * Lead, train, and groom junior DevOps engineers so they grow and become capable of maintaining and supporting internal CI/CD tools

    * Actively collaborate with cross-functional teams such as developers, QA, data, security, and even product

  • 03/2017
    04/2018
    SINGAPORE

    SENIOR CONSULTANT (DEVOPS)

    Network for Electronic Transfers Pte Ltd

    * Worked as part of the new DevOps team to bring more initiatives into running infrastructure as code, covering server provisioning, builds, and application deployment

  • 06/2015
    02/2017
    SINGAPORE

    SENIOR LINUX SYSTEMS ADMINISTRATOR

    Incube8 Pte Ltd

    * Provided architectural and technical guidance to software engineers to optimize their existing application and database performance

    * Transformed most manual administration, application deployment, and server configuration into automated processes using shell scripts, Jenkins, and Puppet

  • 03/2014
    06/2015
    JAKARTA, INDONESIA

    IT ASSISTANT MANAGER, IT INFRASTRUCTURE

    PT. Chubb Life Assurance Indonesia

    * Managed, troubleshot, deployed, and delivered all aspects of Java-based application servers

    * Provided designs, solutions, and optimizations for infrastructure-related projects

  • 02/2013
    02/2014
    JAKARTA, INDONESIA

    SYSTEMS ENGINEER

    PT. NTT Data Indonesia

    * Managed and assisted in the design, planning, implementation, and support of all centralized infrastructure solutions and projects, ensuring maximum availability of business systems in line with the appropriate SLAs

    * Researched, evaluated, designed, implemented, and maintained technical solutions

  • 10/2008
    12/2012
    JAKARTA, INDONESIA

    IT Assistant Manager

    PT. Andaman Lestari Multikreasi

    * Provided consultation, solutions, and support to all client companies

    * Controlled and managed the IT department on a day-to-day basis, supervising full-time employees on the systems and developer teams

  • 4/2007
    10/2008
    JAKARTA, INDONESIA

    Systems Engineer

    PT. Sinarmas Multifinance

    * Troubleshot Linux- and Windows-based application problems

    * Supported the helpdesk in troubleshooting user problems

.03

SKILLS

My skills have enabled me to achieve great results!
SYSTEM ADMINISTRATION SKILLS
Linux/Unix >
LEVEL : ADVANCED EXPERIENCE : 9 YEARS
Red Hat CentOS Ubuntu AIX Troubleshooting
Servers Virtualization >
LEVEL : ADVANCED EXPERIENCE : 6 YEARS
VMware ESXi Xenserver Proxmox
High Availability & Clustering >
LEVEL : ADVANCED EXPERIENCE : 5 YEARS
Load Balancer Pacemaker/Corosync Red Hat Cluster Suite HAProxy Heartbeat
Web Servers Administration >
LEVEL : ADVANCED EXPERIENCE : 8 YEARS
Apache 2.2/2.3/2.4 Nginx 1.8/1.9/1.10 php-fpm/mod_php
Monitoring >
LEVEL : ADVANCED EXPERIENCE : 7 YEARS
Nagios Cacti Zabbix Kibana SNMP
Database Servers Administration >
LEVEL : INTERMEDIATE EXPERIENCE : 5 YEARS
MySQL Percona Postgresql Oracle Optimize Query Indexing
Storage Administration >
LEVEL : INTERMEDIATE EXPERIENCE : 6 YEARS
iSCSI LVM Zoning NAS SAN
Networking >
LEVEL : INTERMEDIATE EXPERIENCE : 8 YEARS
Routing Firewall Router Switch VLAN
PROGRAMMING SKILLS
Scripting >
LEVEL : ADVANCED EXPERIENCE : 6 YEARS
Bash/Shell Deployment Script Python
Automation >
LEVEL : INTERMEDIATE EXPERIENCE : 3 YEARS
Jenkins Puppet Ansible
.04

PORTFOLIO

MY PORTFOLIO

AUTOMATED JENKINS DEPLOYMENT NOTIFICATIONS


About The Project

Previously, we already had automated Jenkins deployments in place, but the manual, repetitive process of notifying our QA team to start their post-deployment verification test suite had become a monotonous task for us. So we were challenged to come up with a solution that would automate the deployment notifications as well as the verification reports.

Since we realized this kind of task would not be possible to accomplish on our own, we (the DevOps team) worked closely with the QA and dev teams on this as part of our quarterly goal. The QA team was in charge of writing the scripts to aggregate the results from all the post-deployment check jobs, while DevOps handled the Slack notification scripts, the Google PageSpeed check scripts, the WASP crawler scripts, and the Jenkins declarative pipeline scripts for the QA post-deployment check jobs. As for the devs, they provided us with commands we could run to check whether any payment issues occurred shortly after we released new code to production.

Finally, after almost 2 months of trial and error, we were able to complete our goals. Although it looked really challenging, we are really satisfied that we achieved such a difficult task.

Additionally, this is what the pipeline looks like in our QA team's Jenkins instance (yes, we had multiple Jenkins instances, but fortunately Spinnaker is really helpful, as it can manage 2 or more Jenkins instances for us).

And this is the automated post-deployment notification that we can see in our Slack channel.
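As a minimal sketch of what such a Slack deployment notification could look like (the channel name, message format, and webhook handling here are illustrative, not our actual scripts):

```python
import json
from urllib import request

def build_payload(job_name: str, status: str, channel: str) -> dict:
    """Build a Slack incoming-webhook payload for a deployment notification."""
    color = "good" if status == "SUCCESS" else "danger"
    return {
        "channel": channel,
        "attachments": [{
            "color": color,
            "text": f"Deployment job {job_name} finished with status: {status}",
        }],
    }

def notify(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook URL."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)
```

Jenkins simply calls a script like this from a post-build step, passing the job name and build result.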

CLOUD, DEVOPS

CUSTOM REST API FROM SLACK TO JENKINS


About The Project

The sole reason this Slack-to-Jenkins middleware was created is that we noticed some limitations in Slack's slash commands:

  • Slash commands require the endpoint to respond within 3 seconds, otherwise the slash command returns a timeout error
  • There is no built-in authorization method to limit who can trigger the slash commands, and some of them are intended to initiate deployments or make configuration changes to production-related environments
  • There is no logging of the slash commands and payloads used to trigger jobs
  • Additionally, we would like to be notified, either in a specific Slack channel or by direct message, whether a specific Jenkins job triggered by a slash command completed successfully or with errors

Based on the requirements above, I decided to take on the challenge and write a REST API that accepts the payload from Slack and initiates a Jenkins job based on that payload. Since I am much more familiar with Python, I decided to use the Django framework for this task. For this project, I decided to simply name the tool slack2jenkins (it might sound plain, but I am really not good at naming). Here are the details of all the libraries and tools:

  • Docker. I used docker-compose to define the 3 containers required for the service:
    • slack2jenkins, the main container that hosts and runs the REST API
    • slack2jenkins-worker, the container running the Celery worker that fetches queued tasks from Redis and processes the payload
    • Redis, the key-value store used for cache data and all asynchronous payloads
  • Python 3.7
  • Django framework
  • Celery as the Python asynchronous task worker
  • Grappelli for the admin dashboard
  • MySQL as the database server storing all the logs, data, and configs
  • Redis as key-value storage, used to store some cache data as well as the payloads and data for asynchronous tasks
  • Some additional Python libraries such as jenkinsapi, redis, mysqlclient, and slackweb

After spending a few weeks on trial and error, and after getting feedback from the team, I managed to get it working perfectly. To explain the flow simply, this is how slack2jenkins works:

  1. Accept the payload from Slack and check whether the payload token and job match the token and data defined in the database.
  2. If they do not match, reply with status.HTTP_401_UNAUTHORIZED. If they match, store the payload data in Redis and respond with HTTP 200 immediately.
  3. The Celery worker picks the payload data off the queue and, based on the payload, fetches the Jenkins job and checks whether that specific job is restricted.
  4. If it is a restricted job, check the 'Authorized Users' table to decide whether the user who triggered the command is authorized. If the user is not authorized, notify them directly on Slack that they do not have permission to execute the command.
  5. If the user is authorized, run the Jenkins build and notify the Slack channel that a Jenkins build has been initiated.
  6. slack2jenkins continues to poll until the job completes, then notifies the user and the channel whether the job completed successfully or with errors found during the build.
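Steps 1–4 above are pure decision logic; here is a minimal, self-contained sketch of that flow, where the table contents and field names are hypothetical stand-ins for the actual Django models:

```python
# Hypothetical in-memory stand-ins for the database tables slack2jenkins reads.
REGISTERED_JOBS = {
    "deploy-prod": {"token": "s3cret", "restricted": True},
    "run-tests":   {"token": "s3cret", "restricted": False},
}
AUTHORIZED_USERS = {"deploy-prod": {"alice", "bob"}}

def authorize(job: str, token: str, user: str):
    """Return (http_status, action) mimicking the slack2jenkins decision flow."""
    entry = REGISTERED_JOBS.get(job)
    if entry is None or entry["token"] != token:
        return 401, "unknown job or bad token"          # step 2: reject early
    if entry["restricted"] and user not in AUTHORIZED_USERS.get(job, set()):
        return 200, "user notified: not authorized"     # step 4: DM the user
    return 200, "build queued"                          # step 5: run the build
```

In the real service the early 401/200 response is returned by the Django view within Slack's 3-second window, while the restricted-job check runs later inside the Celery worker.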

Some screenshots of the slack2jenkins project are kept here for my personal documentation.

INFRASTRUCTURE

PHP 7 PERFORMANCE IMPROVEMENTS


About The Project

This project occurred 9 months after I completed the SAv3 application migration. During those 9 months, the dev team introduced a lot of new features in every sprint. These new features used more resources, which increased the load average across the web servers and degraded the site's response time and throughput.

 

Realizing that things could get even worse if we kept introducing new features without researching how to reduce the impact of new requirements/specs, the dev team and I worked together on reducing the load. Here are some of the changes we made:

  • Upgraded the operating system from CentOS 6.7 to CentOS 7.3
  • Upgraded from PHP 5.6 to PHP 7.0, which dramatically reduced the load average on the web servers
  • Another major performance improvement, suggested by one of the devs, was enabling the PHP opcache
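For reference, enabling the opcache comes down to a few php.ini directives; the values below are common illustrative settings, not our exact production tuning:

```ini
; Enable the PHP opcache (bundled with PHP 7) and give it room to cache
; the whole code base; exact values depend on the application size.
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.validate_timestamps=1
opcache.revalidate_freq=60
```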

 

 

The graph above illustrates the improvement after we migrated to PHP 7: the load average stayed between 5 and 6 during peak hours after the migration, but after we enabled the PHP opcache, the load average went down to roughly 1 on each web server.

To summarize, here are the performance graphs from before and after we completed the migration to CentOS 7/PHP 7 with opcache enabled.

 

DEVOPS

ANSIBLE AUTOMATION & DEPLOYMENT


About The Project

Migrated all our server build and deployment scripts to Ansible. Paired with Jenkins, these two tools are now the mainstay of our internal testing, deployment, and automation.

  • Automated server builds and installation. The Ansible playbook installs all the packages required for a LEMP stack
  • Customized application deployments for the Laravel PHP framework, mainly used for production deployments. It supports 8 serial deployments at a time, including disabling and re-enabling servers in the load balancer so there is no downtime at all during deployment
  • Ansible application deployments are triggered and logged by Jenkins, so any code change can be easily traced from Jenkins
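The zero-downtime rolling deployment described above could look roughly like this playbook skeleton; the host group, role name, and load balancer helper are placeholders for the real ones:

```yaml
# Sketch of a serial, zero-downtime Laravel deployment; names are illustrative.
- hosts: webservers
  serial: 8                      # deploy to 8 servers per batch
  pre_tasks:
    - name: Drain this server from the load balancer
      command: /usr/local/bin/lb-ctl disable {{ inventory_hostname }}
      delegate_to: localhost
  roles:
    - laravel_deploy             # git checkout, composer install, cache warmup
  post_tasks:
    - name: Re-enable this server in the load balancer
      command: /usr/local/bin/lb-ctl enable {{ inventory_hostname }}
      delegate_to: localhost
```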
DEVOPS, INFRASTRUCTURE

ELASTICSEARCH PROJECT


About The Project

Worked together with the software developer team to explore the possibility of using Elasticsearch to replace the SphinxSearch software currently used in our production environment.

  • Node installation and setup are done through an Ansible playbook, so adding a new node is as simple as adding its name to the elasticsearch inventory group. This Elasticsearch role is available on GitHub
  • Using jprante's elasticsearch-jdbc and our custom SQL scripts, we managed to import all of our sites' user profiles into Elasticsearch
  • Configured a 5-node Elasticsearch cluster with 10 shards and 3 replicas for each primary shard, giving us high availability and distributing the load across multiple servers
  • Configured the load balancer to distribute the load equally to all nodes in the Elasticsearch cluster. A health check monitors Elasticsearch on port 9200, so if one node fails, the load balancer excludes the failed node and the cluster keeps working without any issue
  • Set up a config switch in our apps so we can easily switch between SphinxSearch and Elasticsearch. This let us solve and debug application-related issues without any significant downtime on our sites
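The shard layout above (10 primaries, each with 3 replicas, i.e. 40 shard copies spread over the 5 nodes) is declared in the settings body sent when the index is created; a sketch:

```json
{
  "settings": {
    "number_of_shards": 10,
    "number_of_replicas": 3
  }
}
```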
HIGH-AVAILABILITY, INFRASTRUCTURE

SAV3 APPLICATION MIGRATION


About The Project

A website with more than one million active users a day. It was based on PHP, but the code was very outdated, and the main problem was that it was really difficult for developers to make any changes because the code base was messy. Although it had been running stably for more than 10 years, we decided it was time to migrate it to a newer framework that would provide better enhancement and scalability in the future.

  • Worked together with the DBA and developers to solve a complex database migration from the previous version to the new one, which required a deep understanding of each table and of the logic for mapping it onto the new version's totally different database design
  • Identified and advised the developer team on better functionality, such as using selective read/write for the database
  • Provided a better and more secure way of storing the encryption key rather than storing it inside the application itself
  • Planned the database migration timeline, including pre-migration and delta migration, to minimize downtime
  • Set up the web servers on an Nginx and PHP-FPM stack, including optimizing the number of Nginx and PHP-FPM workers, operating system tuning, and database configuration
DEVOPS

JENKINS AUTOMATED DEPLOYMENT


About The Project

Jenkins provides a better and easier way to automate deployment jobs, whether to test, staging, or even production environments.

  • Used extensively by our QA team to run the automated tests that decide whether a recently built application is stable
  • Eases deployment to test and staging environments for the developers
  • Configured Jenkins to support production deployment on multiple web servers by disabling traffic to each server during its deployment; this means that even at peak hours we can deploy to production without any impact on users
  • A production deployment also triggers deployment to our disaster recovery site, so in an emergency we can switch to the disaster recovery site without having to first deploy the latest code base to all of its web servers, which would take some time
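A declarative pipeline sketch of such a rolling production deployment might look like the following; the host list, helper scripts, and downstream DR job name are hypothetical, not the actual setup:

```groovy
pipeline {
    agent any
    stages {
        stage('Rolling deploy') {
            steps {
                script {
                    for (host in ['web1', 'web2', 'web3']) {
                        sh "./lb-ctl disable ${host}"   // drain traffic first
                        sh "./deploy.sh ${host}"        // push the new code
                        sh "./lb-ctl enable ${host}"    // back into rotation
                    }
                }
            }
        }
    }
    post {
        success {
            // mirror the release to the disaster recovery site
            build job: 'deploy-dr-site', wait: false
        }
    }
}
```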
HIGH-AVAILABILITY, INFRASTRUCTURE

REDIS SENTINEL


About The Project

As I use Redis extensively as the caching, queue, and session engine for my company's sites, I realized that a single Redis server could become a disaster if it crashed or failed to boot at all. So I researched how to design a highly available Redis setup whose failover would be seamless to our users. After reading a lot of blogs and community forums, I arrived at Redis Sentinel, which does exactly what I expected.

  • 2 servers running Redis master-slave replication
  • Another 3 servers run Redis Sentinel, which monitors the Redis master-slave pair
  • In the event of a failover, Redis Sentinel automatically promotes the slave to be the new master and reconfigures the previous master as a slave of the new master
  • Wrote a shell script to automatically reconfigure the Kemp load balancer to use the new Redis master in the event of a failover
  • Set all the application config properties to use the load balancer IP as the Redis host rather than connecting to the Redis server directly. With this method, combined with the failover shell script, nothing needs to change in the application config and failover works flawlessly
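The Sentinel side of this setup fits in a few lines of sentinel.conf on each of the 3 monitoring servers; the master name, address, quorum, and script path below are illustrative, and the client-reconfig-script hook is the natural place to wire in the Kemp reconfiguration script:

```
sentinel monitor mymaster 10.0.0.11 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
# Run after a failover completes, with the old and new master addresses as
# arguments; this is where the Kemp load balancer update script hooks in.
sentinel client-reconfig-script mymaster /usr/local/bin/update-kemp-lb.sh
```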

SPHINX SEARCH OPTIMIZATION


About The Project

In this project we worked together with a consultant from Sphinx Search with more experience in fine-tuning and optimizing our Sphinx infrastructure. We use Sphinx extensively as our sites' search engine, covering text search, username search, profile fields such as sex, and so on.

  • Purchased 4 new servers for the Sphinx infrastructure. Each of these servers has 48 cores, because Sphinx Search uses the CPU intensively
  • Configured distributed Sphinx search. Previously, every single query utilized only 1 core, but with distributed searching a single search request can be spread across multiple cores, reducing search time even further. For example, a query that took 2 seconds without distributed search dropped to 0.1 seconds after we distributed it across 12 cores, which really improved our sites' user experience
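In Sphinx configuration terms, this roughly means splitting the index into parts, defining a distributed index over them, and allowing searchd several threads per query; a simplified sketch with illustrative index names:

```
index profiles_dist
{
    type  = distributed
    local = profiles_part0   # each local part can be searched in parallel
    local = profiles_part1
    local = profiles_part2
    local = profiles_part3
}

searchd
{
    listen       = 9312
    dist_threads = 12        # threads used per query on local distributed parts
}
```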
MONITORING

KIBANA MONITORING


About The Project

Imagine working on a cluster of more than 100 servers with no idea what error logs the servers are producing. This is the solution I designed and set up to give the developer team more insight into our Laravel application running on the production servers, without giving them direct access to those servers. It also makes it much easier for me to gather all the system logs from the servers into a single dashboard.

  • Filebeat is installed on each server automatically using Puppet, with a predefined config file specifying which files should be forwarded to the Logstash server
  • Logstash is the data collection engine that receives all the log files forwarded by Filebeat. All the data and logs are stored in Elasticsearch for easier searching and indexing
  • Kibana is the dashboard that connects to Elasticsearch and displays all the stored logs. It provides a lot of search functionality, so you can find any specific keyword or criterion you define
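The Filebeat piece that Puppet drops onto every server is only a few lines; the log paths and Logstash host below are examples, not the real ones:

```yaml
# Example filebeat.yml pushed out by Puppet; paths and host are illustrative.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages
      - /var/www/app/storage/logs/laravel.log
output.logstash:
  hosts: ["logstash.internal:5044"]
```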

JASPER REPORTING


About The Project

Working in a startup usually means wearing many hats at the same time, and that was the case for me: I was a system administrator, DevOps engineer, database administrator, and a bit of support all at once. Once a month, or every couple of weeks, the marketing team would ask for my help extracting reports from our database. Mostly these were repetitive tasks: number of user sign-ups, revenue reports, member statistics for a specific region/country, and so on. Rather than having them come to me each time with the same task, I thought of using a tool to generate the reports and letting them log in and view the reports directly.

  • Set up and installed the JasperSoft open source edition, a complete BI suite for creating custom reports
  • Created reporting templates using Jaspersoft Studio
  • Wrote complex queries to get all the statistics and revenue reports for the current year, month, or even a defined date range
  • Created documentation on using Jasper Reports so the marketing team can view the reports any time they want. The reports run their queries on a slave server, so they do not affect production performance at all
HIGH-AVAILABILITY, INFRASTRUCTURE

DISASTER RECOVERY SITE


About The Project

This was my first project in my first week at Incube8. This data center handles some of the traffic for our Asia website and also serves as the disaster recovery site for our main production site in the Switch data center, Las Vegas.

  • Managed all hardware procurement for the new data center: firewalls, switches, load balancers, and servers. This included liaising with and managing vendors for hardware delivery and installation in our new data center
  • Found a suitable, reliable data center that could fulfill our required SLA
  • Designed a highly available system and network using redundant devices, such as pairs of firewalls, switches, and load balancers, to mitigate hardware failures that might disrupt service
  • Set up database replication from the production site to the disaster recovery database server. All servers have been set up properly, and any deployment to the production site is also deployed to our disaster recovery site using Jenkins
DEVOPS

PUPPET AUTOMATION


About The Project

Built a Puppet automation server to automate installation and configuration on all Red Hat Linux servers.

  • Automated installation and configuration of the Nagios NRPE client, syslog, and SNMP, providing centralized config management and easier integration with the Nagios and Cacti monitoring tools
  • Automated configuration of all Linux servers to use a local repository, saving bandwidth and providing faster package updates
  • Other useful automation, such as configuration of the DNS resolver, NTP client, and hosts file
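As a sketch, the NRPE part of that automation is the classic package/file/service pattern in Puppet; the module and file names here are hypothetical:

```puppet
# Hypothetical sketch: ensure the NRPE agent is installed, configured, running.
class monitoring::nrpe {
  package { 'nrpe':
    ensure => installed,
  }
  file { '/etc/nagios/nrpe.cfg':
    ensure  => file,
    source  => 'puppet:///modules/monitoring/nrpe.cfg',
    require => Package['nrpe'],
    notify  => Service['nrpe'],   # restart NRPE when the config changes
  }
  service { 'nrpe':
    ensure => running,
    enable => true,
  }
}
```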
HIGH-AVAILABILITY, INFRASTRUCTURE

F5 & Pacemaker high availability system


About The Project

The purpose of this project was to create a reliable internal system using an F5 LTM load balancer to distribute load to 4 clusters of Java-based application servers.

  • Set up F5 as the load balancer serving clients from the internal network and the public internet, with web acceleration and SSL offload for the internet-facing site
  • Load balancer backup using a Linux server running HAProxy
  • Dual-active JBoss application server clustering using Pacemaker/CMAN/Corosync as the cluster software, DRBD as the block device, and GFS2 as the cluster file system

MONITORING & BACKUP SYSTEM


About The Project

Set up monitoring tools and a backup system for the infrastructure servers and switches, including SMS notification of any server issue.

  • Set up Nagios as the main monitoring tool and enabled a monitoring page for easy viewing by the monitoring team
  • Set up Cacti for daily resource monitoring by system administrators
  • Set up Amanda as the backup solution for all server configurations, databases, and web scripts
  • Integrated the internal monitoring system with WhatsApp to deliver server and service alerts to a single user or a WhatsApp group

NSDI PROJECT


About The Project

NSDI (National Spatial Data Infrastructure) is an Indonesian government project under BIG (Badan Informasi Geospasial). The infrastructure was set up at 10 ministry sites and 1 DRC site in Batam.

  • Set up Oracle database clustering on Red Hat Cluster Suite (RHCS)
  • Installed Oracle Database 11g at all ministry sites
  • Set up an HAProxy load balancer for application and SMTP load balancing, with Heartbeat for its high availability
  • Designed and executed multiple test cases to ensure service high availability and continuity
  • Designed the storage LUNs by splitting the operating system LUN from the data LUN, so the data can be replicated to the DRC site using the NetApp SnapMirror feature

 

CLOUD

AWS CLOUD HOSTING


About The Project

Set up the initial infrastructure and VPC design for an online travel agency (OTA) migrating all of its servers from a traditional hosting company to the Amazon Web Services cloud.

  • Designed a multi-layered network architecture using Amazon VPC, with web, database, application, and management layers
  • Designed the VPC and servers across 2 availability zones, so a disruption in one zone cannot bring all services down
  • All administration and developer tasks can only be performed after connecting to the VPN, and only through the management layer
  • Web load balancing using Elastic Load Balancing
  • Set up auto scaling for the web and reporting servers to handle high-load conditions
  • Daily database, log, and web script backups to Amazon S3 using a shell script

TACACS CENTRALIZED AUTHENTICATION SERVER


About The Project

This solution was provided for a financial company with around 80 Cisco switches and routers, whose main problem was account management whenever they had to add or remove users or set up privileges.

  • Set up a Linux server providing centralized authentication for around 80 Cisco switches
  • Centralized accounts by integrating with Active Directory, so the switches use AD usernames and passwords
  • Added a feature limiting account privileges to administrator or operator, preventing unprivileged users from changing Cisco switch settings
VIRTUALIZATION

VIRTUALIZATION & DATA CENTER COLLABORATION


About The Project

Virtualization project using VMware ESXi and vCenter to host all virtual servers on 3 Dell R710s, 2 redundant network switches, 2 redundant switches for iSCSI connectivity, and a NetApp FAS2040.

  • Migrated around 40 physical servers to virtual servers
  • Configured multiple VLANs and trunking between the 2 data centers for the VMware migration project
  • The migration required proper planning and infrastructure and network design, because the physical servers were hosted in two different data centers and the final result would be in one data center only
  • Configured the UPS to automatically shut down all virtual servers whenever it detects that the UPS battery is critically low
CLOUD, DEVOPS, INFRASTRUCTURE

OPRENT INFRASTRUCTURE


About The Project

This infrastructure was proposed and designed for a startup that had not yet set up its infrastructure properly. As the consultant for their first startup project, I advised them to use AWS (Amazon Web Services), because they could start with a cheaper EC2 instance and upgrade it as more traffic came in.

  • Vertical design concept: vertical scalability is the ability to increase the capacity and computing power of existing hardware by adding more CPU, memory, and disk space. Most of the time, scaling vertically requires shutting down or restarting the existing server, which is also the reason for horizontal scalability.
  • Horizontal scalability is the ability to serve more traffic by adding more servers to a pool or load balancer. With horizontal scalability in place, if one or more servers need to be shut down to upgrade the instance type, there is no significant impact on the production environment
  • Set up an elastic load balancer to handle all incoming traffic and distribute it to the backend servers
  • Set up, installed, and configured Nginx, uWSGI, Django, and PostgreSQL on all servers, with high availability always first in mind
  • Provided multiple monitoring tools, such as Nagios, Cacti, Sentry, and Flower
  • Designed and wrote a customized deployment script, so application deployment can be run remotely from Jenkins without having to log in to the servers
DEVOPS, INFRASTRUCTURE

AGENCY PORTAL


About The Project

This project was from when I was back in Indonesia: I was asked to design a highly available, fault-tolerant infrastructure, and I was the main and only person setting it up from scratch.

  • Installed, set up, and configured at least 2 servers for each role, for example 2 web frontend servers, 2 API servers, and another 2 database servers
  • Load balancing is handled by an F5 hardware load balancer that splits the traffic between the web frontend and API servers
  • Set up master-slave replication for the database servers; in case of a master database failure, the application can still be switched manually to use the slave servers
  • Analyzed the application deployment requirements and wrote the deployment script, so applications can be easily deployed via Jenkins to test, staging, or even production environments
.05

CONTACT

Think my experience can help you? Do not hesitate to write!
Drop me a mail

GET IN TOUCH

Now you have seen what I can do and what I have accomplished for my previous companies and clients. Why not see how I can help your business? If you would like more details, please leave me a message using the form below or email [email protected]