Tag Archives: Docker

Run 1,000 Docker Redis Containers in Less Than 15 Minutes on a Cluster of 5 Cloud Servers with 2GB of Memory Each


// DCHQ.io | Linux Containers | Application Deployment

Background

While application portability (i.e. being able to run the same application on any Linux host) is still the leading driver for the adoption of Linux Containers, another key advantage is the ability to optimize server utilization so that you can use every bit of compute. Of course, for upstream environments like PROD, you may still want to dedicate more than enough CPU & memory to your workload – but in DEV/TEST environments, which typically represent the majority of an organization's compute resource consumption, optimizing server utilization can lead to significant cost savings.

This all sounds good on paper — but DevOps engineers and infrastructure operators still struggle with the following questions:

  • How can I group servers across different clouds into clusters that map to business groups, development teams, or application projects?
  • How do I monitor these clusters and get insight into the resource consumption by different groups or users?
  • How do I set up networking across servers in a cluster so that containers across multiple hosts can communicate with each other?
  • How do I define my own capacity-based placement policy so that I can use every bit of compute in a cluster?
  • How can I automatically scale out the cluster to meet the demands of the developers for new container-based application deployments?

DCHQ, available in hosted and on-premise versions, addresses all of these challenges and provides the most advanced infrastructure provisioning, auto-scaling, clustering and placement policies for infrastructure operators or DevOps engineers.

  • A user can register any Linux host, running anywhere, by executing an auto-generated script that installs the DCHQ agent along with Docker and the (optional) software-defined networking layer. This task can be automated programmatically using our REST APIs for creating “Docker Servers” (https://dchq.readme.io/docs/dockerservers)
  • Alternatively, DCHQ integrates with 13 cloud providers, allowing users to automatically spin up virtual infrastructure on vSphere, OpenStack, CloudStack, Amazon Elastic Compute Cloud (EC2), Google Compute Engine, Rackspace, DigitalOcean, SoftLayer, Microsoft Azure, and many others.
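
To make the server-utilization argument concrete, the sketch below shows the kind of container-level packing that DCHQ automates, expressed with the plain Docker CLI. The memory cap, image tag and container count are illustrative assumptions, not DCHQ's actual placement logic.

  # Pack many small Redis containers onto a 2GB host by capping each
  # container's memory (16 MB per container is an illustrative figure).
  for i in $(seq 1 200); do
    docker run -d \
      --name "redis-$i" \
      --memory 16m \
      redis redis-server --maxmemory 12mb
  done

  # Check how much of the host each container is actually using.
  docker stats --no-stream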

Getting Started with Docker: Simplifying Devops


via Introduction to Docker Tutorial | Toptal.

If you like whales, or are simply interested in quick and painless continuous delivery of your software to production, then I invite you to read this introductory Docker Tutorial. Everything seems to indicate that software containers are the future of IT, so let’s go for a quick dip with the container whales Moby Dock and Molly.

Part 3: Setting up a Docker development environment with Vagrant


via Setting up a Docker development environment with Vagrant – Part 3 – ActiveLAMP.

This post is part 3 in the series “Hashing out a Docker workflow”. For background, check out my previous posts.


Now that I’ve laid the groundwork for the approach I want to take to local environment development with Docker, it’s time to explore how to make the local environment “workable”. In this post we will build on top of what we did in my last post, Docker and Vagrant, and create a working local copy that automatically updates the code inside the container running Drupal.
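
As a rough illustration of the “code updates automatically inside the container” idea (not the exact setup from the ActiveLAMP series), bind-mounting the project directory into the official Drupal image looks like this; the image name, port and paths are assumptions:

  # Mount the local Drupal code into the container so edits on the host
  # show up immediately inside the running site, with no rebuild.
  docker run -d \
    --name drupal-dev \
    -p 8080:80 \
    -v "$(pwd)/drupal:/var/www/html" \
    drupal

  # Browse to http://localhost:8080 (or the Vagrant VM's IP) and edit
  # files under ./drupal on the host to see the changes immediately.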

Continuous Delivery Testing Pathway


This pathway is a tool to help guide your self development in continuous delivery testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:

  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.

STEP – Removing release testing

Why does this pathway exist? Understand the key reasons to significantly shorten a release process, the arguments against release testing and why organisations aim to avoid batched releases in agile environments:

EXERCISE
[2 hours] Research your existing release process and talk to people within your organisation to find out whether there are any current initiatives to improve it.

STEP – Introduction to continuous delivery

What is the end goal? Discover the basics of continuous delivery and the theory of how it can be implemented in organisations.

EXERCISE
[1 hour] Based on what you’ve read, try to explain the theory of continuous delivery in your own words to someone in your team. Describe what appeals to you about continuous delivery, what you disagree with, and things that you think will be difficult to implement in your organisation. Afterwards, if you have any remaining questions, raise these with a technical lead or coach for further discussion.

STEP – Experiences in continuous delivery

How are other organisations doing continuous delivery? There is a lot of variance in implementation and differing opinions about how to approach the theory. Understand the realities of the people, processes and tools of teams doing continuous delivery:

EXERCISE
[2 hours] Compare the experiences shared in the links above with the theory of continuous delivery. Identify common themes, and areas where ideas or implementation details differ. Discuss your analysis with a technical lead or coach.

STEP – Starting with continuous integration

What is the first step? Understand the concept of continuous integration:

EXERCISE
[3 hours] At the start of this talk transcript, Jez Humble points out that most people aren’t doing continuous integration. How does the approach to continuous integration in your team differ from the theory? Talk to a developer to confirm your understanding of your branching strategy, the way you use source control management tools, and how you manage merging to master. If you use a continuous integration tool, create a list of the jobs that are used by your team during development, and be sure that you understand what each one does. Reflect on how quickly your team responds to build failures in these jobs, and who takes ownership of resolving them. Discuss this exercise with a technical lead or coach to collaboratively identify opportunities for improvement, then raise these ideas at your next team retrospective.
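
As a reference point for the exercise, a typical commit-stage CI job is little more than a clean checkout, a dependency install, the fast test suite and a build. The sketch below is a generic example; the repository URL and the npm commands are placeholders, not your team’s actual jobs:

  # Minimal commit-stage job: fail fast on every push to master.
  set -e                            # stop at the first failing step
  git clone --depth 1 https://example.com/team/app.git
  cd app
  npm install                       # restore dependencies
  npm test                          # fast unit tests only
  npm run build                     # produce the deployable artifact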

STEP – Theory of test automation

Continuous delivery puts a lot of focus on test automation. In order to support development of an effective pipeline it’s important to understand common strategies for automation, and the distinction between checking and testing:

EXERCISE
[1 hour] Read through the automation strategy for your product. How well does your existing strategy for automation support your delivery pipeline? What opportunities exist to improve this strategy? Discuss your thoughts with a technical lead or coach.

STEP – A delivery pipeline

Understand how to construct a delivery pipeline and the role of automation within it:

EXERCISE
[3 hours] Create a visual representation of the current delivery pipeline for your product. Use a timeline format that shows the build jobs in your continuous integration tool at every stage from development through to production deploy, any test jobs that execute automated suites, and points where the tester is hands-on, exploring the product. Compare your pipeline to the simplified images by Yassal Sundman for continuous delivery and continuous deployment, then reflect on the following questions:

  1. How would your approach to testing change, or not, if you were able to deploy to production 10 times a day? How about 100 times a day?
  2. Does the coverage provided by your automation give you a degree of comfort or confidence? If not, what needs to change?
  3. Does your automation execute fast enough? How fast do you think it should be? How can you achieve this?
  4. Where in the pipeline would you want to retain hands-on testing? How would you justify this?

Discuss your ideas with a technical lead or coach. Work together to identify actions from your thinking and determine how to proceed in implementing change.
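
If it helps to visualise the timeline, a delivery pipeline often reduces to a handful of ordered stages, each one gating the next. The outline below is purely illustrative; the stage names, the npm commands and the deploy.sh wrapper script are assumptions, not a prescription for your product:

  # Stage 1 – commit: compile and run fast unit tests on every push.
  npm install && npm test

  # Stage 2 – acceptance: deploy the build to a test environment and run
  # the automated acceptance suite against it (deploy.sh is hypothetical).
  ./deploy.sh test-env && npm run test:acceptance

  # Stage 3 – exploratory: the pipeline pauses here for hands-on testing
  # before anything is promoted further.

  # Stage 4 – production: the same artifact is promoted without a rebuild.
  ./deploy.sh production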

STEP – Non-functional testing in continuous delivery

Learn more about integrating security, performance, and other non-functional testing in a continuous delivery pipeline:

EXERCISE
[2 hours] Does your organisation have a non-functional testing “sandwich”? Having read more about organisations who integrate these activities earlier in the process, what opportunities do you see to improve the way that you work? What would the first steps be? Talk to a technical lead or coach about what you’d like to see change.

STEP – Cross-browser testing

For continuous delivery of a web application, it’s important to include cross-browser testing in the delivery pipeline. Discover strategies for cross-browser testing and the tools available to support it:

EXERCISE
[8 hours] Learn more about the common cross-browser tools that are available, understand the advantages and disadvantages of each option, then select a tool to trial. Create a prototype to execute existing browser-based automation for your product across multiple browsers. If successful on your local environment, attempt to create a prototype job in your continuous integration tool to verify that your chosen solution works as part of your pipeline. Discuss what you learned about the tool and the results of your experiment with a technical lead or coach.
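
One commonly used option for this prototype is a Selenium Grid running in Docker containers, which fits naturally into a containerised pipeline. The sketch below uses the public selenium/hub and selenium/node-* images; treat the exact images and the linking approach as assumptions to verify against the current docker-selenium documentation:

  # Start a Selenium hub, then attach one Chrome and one Firefox node.
  docker run -d -p 4444:4444 --name selenium-hub selenium/hub
  docker run -d --link selenium-hub:hub selenium/node-chrome
  docker run -d --link selenium-hub:hub selenium/node-firefox

  # Point existing WebDriver tests at the grid:
  #   http://localhost:4444/wd/hub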

STEP – Test data & databases

Discover the additional considerations around test data in continuous delivery:

EXERCISES
[1 hour] Data is a constant headache for testers. Consider the limitations of the test data in use by your automation. How could you improve the data within your delivery pipelines? How could you improve the way that you locate data for testing? Talk through your ideas with a technical lead or coach.
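
One practical way to tame test data in a containerised pipeline is to treat the database itself as disposable: start a fresh container for each run and load a known seed. A rough sketch with the official MySQL image follows; the credentials, port and seed file are placeholder assumptions:

  # Fresh, throwaway database for each pipeline run.
  docker run -d --name test-db \
    -e MYSQL_ROOT_PASSWORD=secret \
    -p 3306:3306 \
    mysql:5.7

  # Once the server is accepting connections, load a known seed so every
  # run starts from exactly the same data.
  mysql -h 127.0.0.1 -P 3306 -u root -psecret < seed-data.sql

  # Throw it away afterwards – no cleanup scripts, no stale data.
  docker rm -f test-db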

STEP – Configuration management & environments

An effective delivery pipeline is supported by multiple test environments. Learn more about configuration management, environments and infrastructure services in continuous delivery:

EXERCISES
[1 hour] Talk to your operations or support team about how they provide test environments for continuous integration, the infrastructure required to support a delivery pipeline, and what their plans are for future changes in this space.

STEP – Testing in production

Understand A/B testing and feature toggles:

EXERCISE
[1 hour] Talk to people in your organisation to find out how you currently use feature toggles and how you make decisions about what to keep based on user analytics. Could your approach be more responsive through targeted use of a monitoring tool like Splunk? Share your thoughts with a technical lead or coach.

Create centralized logging in Docker containers with NodeJS and Bluemix


via Create centralized logging in Docker containers with Node.js and Bluemix.

When building the microservices that make up a Bluemix application, developers often encounter questions such as: How do you follow the microservices’ states and their log outputs, and what is happening with the different parts of the application? In this article, I show you how to deploy a single microservice—in this case, a Cloud Foundry–based Node.js application—into the Bluemix platform along with one way of creating centralized logging inside Docker containers.
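
The article walks through its own Bluemix-specific setup; as a more general sketch of the underlying Docker pattern, the Node.js service simply logs to stdout and Docker’s logging facilities expose or forward that stream. The image name and syslog address below are placeholder assumptions:

  # The Node.js app writes to stdout/stderr; Docker captures that stream.
  docker run -d --name my-service my-node-image
  docker logs -f my-service          # tail one container's output locally

  # To centralize instead, point the container's log driver at a collector.
  docker run -d --name my-service-central \
    --log-driver=syslog \
    --log-opt syslog-address=tcp://logs.example.com:514 \
    my-node-image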

Video Part 1: Building a Microservice using NodeJS & Docker


In this session, we will start by building a simple Express microservice. We will then create a Docker image for the service using both a Dockerfile and the command line. Lastly, we will push our image to Docker Hub. The session will run for 30 minutes, with 15 minutes for Q&A.
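
For reference while watching, the command-line half of the session boils down to a handful of standard Docker CLI calls; the account and image names below are placeholders, and the Dockerfile itself is the one built during the session:

  # Build the image from the Dockerfile in the current directory.
  docker build -t myaccount/hello-express:1.0 .

  # Run it locally, mapping the Express port to the host, and smoke-test it.
  docker run -d -p 3000:3000 --name hello myaccount/hello-express:1.0
  curl http://localhost:3000/

  # Publish the image to Docker Hub.
  docker login
  docker push myaccount/hello-express:1.0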

Video Part 2: Building a Microservice using NodeJS & Docker


In this session, we learned about the Dockerfile, the docker build command, the Docker build cache, and how to optimize the process of building Docker images. We also demonstrated how to move Docker images between different Docker hosts. The session ran for 30 minutes.
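
The two mechanics covered here (reusing the build cache and moving an image between hosts) map onto standard Docker commands; the host and image names below are placeholders:

  # Rebuilds reuse cached layers for unchanged instructions, so order the
  # Dockerfile from least- to most-frequently changing steps.
  docker build -t myaccount/hello-express:1.1 .

  # Move the image to another Docker host via a registry...
  docker push myaccount/hello-express:1.1     # then docker pull on the other host

  # ...or without a registry, by streaming it over SSH.
  docker save myaccount/hello-express:1.1 | gzip \
    | ssh user@other-host 'gunzip | docker load'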