In recent weeks we’ve published a number of posts as part of our series on Kubernetes and Google Container Engine. If this is your first foray into these blogs, we suggest you check out past Kubernetes blog posts.

Containers are emerging as a new layer of abstraction that makes it easier to get the most out of your VM infrastructure. In this post, we'll look at the implications of running container-based applications on fleets of VMs, and we’ll talk about why container clusters reduce deployment risk, foster more modular apps, and encourage sharing of resources.

Container Images
The first building block of a containerized application is the container image. This is a self-contained, runnable artifact that brings with it all of the dependencies necessary to run a particular application component. The VM analogy is an ISO image, which usually contains an entire operating system and everything else installed on the machine. Unlike an ISO, a container image holds only a single application component and can be booted as a running container that shares an OS and host machine with other containers. Because each image omits the operating system, the same app packaged as containers can be several gigabytes smaller, depending on your Linux distro and the number of VMs it replaces. Smaller artifacts mean faster deployments and easier management.
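As a concrete sketch (the base image, file name, and dependency below are hypothetical, not from the original post), a container image holding a single application component can be described in a few lines:

```dockerfile
# A slim base layer; the kernel itself is shared with the host at run time.
FROM python:3-slim
# Only this one component goes into the image.
COPY app.py /srv/app.py
# Only this component's dependencies are installed.
RUN pip install flask
CMD ["python", "/srv/app.py"]
```

Building this produces a reusable artifact far smaller than an ISO, since it carries only one component and its dependencies rather than a whole machine's worth of software.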

Reducing Deployment Risk
You may have experienced deploying your application onto VMs in production, only to find that something has gone horribly wrong and you need to roll back quickly. The code may have worked on the developer’s machine, but once you run your deployment process you discover that an installation is failing for some unknown reason.

With container images, you can run an offline process (meaning not during deployment) that produces a reusable artifact that can be deployed to a container cluster. In this model, issues that would affect your deployment (like installation failures) are caught earlier and out of the critical path to production. This means you have more time to react and correct any issues, and rolling back is easier and less risky: just replace the container image with the previous version.

Modular App Components
As you're designing and building your application, it’s tempting to just add more pieces onto your existing VMs. The hard part is unwinding these pieces into modular chunks that can be scaled independently. When you suddenly run out of VM capacity, you can’t deliver a reliable service to your users. So it’s important to quickly add resources without re-architecting.

When you create a Kubernetes container cluster (for example, via Google Container Engine) you’re giving your app logical compute, memory, and storage resources. And it’s really easy to add more. Since your application components don’t care where they run, you have two independent tasks to complete:
  1. Create a fleet of VMs to host your containers
  2. Create and run containers on your fleet of virtual machines
Using containers for your application components and using Kubernetes as an abstraction layer makes your app naturally more modular. Of course, it’s possible to have modularity on VMs with well designed scripts, but with containers it’s hard not to design modular applications!
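As a hedged sketch of those two tasks (the cluster name, node count, and image path are placeholders, and exact flags vary by gcloud release):

```shell
# Task 1: create a fleet of VMs to host your containers.
gcloud container clusters create my-cluster --num-nodes 3

# Task 2: create and run containers on that fleet.
kubectl run my-app --image=gcr.io/my-project/my-app:v1 --replicas=2
```

Because the second step never names a specific VM, scaling the fleet and scaling the app remain independent operations.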

Shared Resources and Forecasting
For your application containers to run together on arbitrary computers, they need an agreement about what they're allowed to do. Container clusters establish a declarative contract between resource needs and resource availability. We don’t recommend that you use containers as secure trust boundaries, but running trusted containers together and relying on VMs as the isolation boundary lets you get the most utilization out of your existing VM footprint.

Another problem you may face is how to forecast capacity across multiple people and applications. Your team can use Kubernetes to share machines while being protected from noisy neighbors via resource isolation in the kernel. Now you can see resources across your teams and apps, aggregating numerous usage signals that might be misleading on their own. You can forecast this aggregate trend into the future for more cost-effective use of hardware resources.

The decoupling of applications from the container cluster separates the operational tasks of managing applications and the underlying machines. Modular applications are easier to scale and maintain. Building container images before deployment reduces the risk that you’ll discover installation problems when it’s too late. And sharing resources leads to better utilization and forecasting, making your cloud smarter.

If you’d like to take container clusters for a spin yourself, sign up for a free trial and head on over to Google Container Engine.

-Posted by Brendan Burns, Software Engineer

Many developers containerize their application so that it can run on any infrastructure; however, it’s still too hard to run containers on a private cloud. Together with Mirantis, we’ve integrated Kubernetes, our open source container manager, into OpenStack. This integration will make it easier to run your apps on a private cloud, while enabling new “hybrid” cloud possibilities. To learn more, sign up for the waiting list.

New “Hybrid” Possibilities
If your company is bigger than a startup, you probably have both on-premises and public cloud infrastructure to host your portfolio of apps. This “hybrid” approach is great in theory: your on-premises infrastructure offers control and you can scale to the public cloud when necessary. Unfortunately, it’s not always easy to take advantage of this flexibility—it’s still too hard to move workloads between infrastructures.

With Kubernetes powering both your private and public clouds, you’ll be able to unlock the power of a hybrid infrastructure. For example, you might run a primary instance of your application in a private cloud, and then replicate other instances to Google Container Engine in geographies where you don’t have on-premises infrastructure.

Learn More
To learn more about how we’re working together with Mirantis, read their blog post. And feel free to stop by the Kubernetes Gathering on February 25th in San Francisco to see Mirantis give a full demo.

-Posted by Kit Merker, Product Manager, Google Cloud Platform

Deploying a new build is a thrill, but every release should be scanned for security vulnerabilities. And while web application security scanners have existed for years, they’re not always well-suited for Google App Engine developers. They’re often difficult to set up, prone to over-reporting issues (false positives)—which can be time-consuming to filter and triage—and built for security professionals, not developers.

Today, we’re releasing Google Cloud Security Scanner in beta. If you’re using App Engine, you can easily scan your application for two very common vulnerabilities: cross-site scripting (XSS) and mixed content.

While designing Cloud Security Scanner we had three goals:
  1. Make the tool easy to set up and use
  2. Detect the most common issues App Engine developers face with minimal false positives
  3. Support scanning rich, JavaScript-heavy web applications
To try it for yourself, select Compute > App Engine > Security scans in the Google Developers Console to run your first scan, or learn more here.

So How Does It Work?
Crawling and testing modern HTML5, JavaScript-heavy applications with rich multi-step user interfaces is considerably more challenging than scanning a basic HTML page. There are two general approaches to this problem:

  1. Parse the HTML and emulate a browser. This is fast; however, it comes at the cost of missing site actions that require a full DOM or complex JavaScript operations.
  2. Use a real browser. This approach avoids the parser coverage gap and most closely simulates the site experience. However, it can be slow due to event firing, dynamic execution, and time needed for the DOM to settle.
Cloud Security Scanner addresses the weaknesses of both approaches by using a multi-stage pipeline. First, the scanner makes a high-speed pass, crawling and parsing the HTML. It then executes a slow and thorough full-page render to find the more complex sections of your site.

While faster than a real-browser crawl, this process is still too slow, so we scale horizontally. Using Google Compute Engine, we dynamically create a botnet of hundreds of virtual Chrome workers to scan your site. Don’t worry: each scan is limited to 20 requests per second or lower.

Then we attack your site (again, don’t worry)! When testing for XSS, we use a completely benign payload that relies on Chrome DevTools to execute the debugger. Once the debugger fires, we know we have JavaScript code execution, so false positives are (almost) non-existent. While this approach comes at the cost of missing some bugs due to application specifics, we think that most developers will appreciate a low effort, low noise experience when checking for security issues—we know Google developers do!

As with all dynamic vulnerability scanners, a clean scan does not necessarily mean you’re free of security bugs. We still recommend a manual security review by your friendly web app security professional.

Ready to get started? Learn more here. Cloud Security Scanner is currently in beta with many more features to come, and we’d love to hear your feedback. Simply click the “Feedback” button directly within the tool.

-Posted by Rob Mann, Security Engineering Manager

For those of you developing applications on the cloud, performance is often a critical concern. It turns out that it’s surprisingly difficult to evaluate cloud offerings beyond just looking at price or feature charts. When we looked at how our own users could measure the relative performance of Google Cloud Platform, it was clear they struggled with this exact problem.

We wanted to make the evaluation of cloud performance easy, so we collected input from other cloud providers, analysts, and experts from academia. The result is a cloud performance benchmarking framework called PerfKit Benchmarker. PerfKit is unique because it measures the end-to-end time to provision resources in the cloud, in addition to reporting the standard metrics of peak performance. You'll now have an easy way to benchmark across cloud platforms, while getting a transparent view of application throughput, latency, variance, and overhead.

We created a visualization tool, PerfKit Explorer, to help you interpret the results. We’re including a set of pre-built dashboards, along with data from our own internal network performance tests. This way, you'll be able to play with PerfKit Explorer without first having to input your own data.

We’re releasing the source code under the ASLv2 license, making it easy for contributors to collaborate and maintain a balanced set of benchmarks. If you want something to be removed or added, we welcome your participation through GitHub.

PerfKit is a living benchmark framework, designed to evolve as cloud technology changes, always measuring the latest workloads so you can make informed decisions about what’s best for your infrastructure needs. As new design patterns, tools, and providers emerge, we'll adapt PerfKit to keep it current. It already includes several well-known benchmarks, and covers common cloud workloads that can be executed across multiple cloud providers.
Sample Dashboard of Compute Performance
Over the last year, we’ve worked with over 30 leading researchers, companies, and customers and we're grateful for their feedback and contributions. Those companies include: ARM, Broadcom, Canonical, CenturyLink, Cisco, CloudHarmony, CloudSpectator, EcoCloud/EPFL, Intel, Mellanox, Microsoft, Qualcomm Technologies, Inc., Rackspace, Red Hat, Tradeworx Inc., and Thesys Technologies LLC. In addition, we’re excited that Stanford and MIT have agreed to lead a quarterly discussion on default benchmarks and settings proposed by the community.

We hope you find the tools useful and easy to use.

- Posted by the Google Cloud Platform Performance Team

From popular mobile apps (Foursquare) to acclaimed indie films (The Grand Budapest Hotel), some of the world’s most creative ideas have debuted at the annual SXSW Festival in Austin, Texas. For over 25 years, SXSW's goal has been to bring together the most creative people and companies to meet and share ideas. We think one of those next great ideas could be yours, and we’d like to help it be a part of SXSW.

Do you have an idea for a new app that you think is SXSW worthy? Enter it in the Google Cloud Platform Build-Off. Winners will receive up to $5,000 in cash. First prize also includes $100,000 in Google Cloud Platform credit and 24/7 support, and select winners will have the chance to present their app to 200 tech enthusiasts during the Build-Off awards ceremony at SXSW.

Here’s how it works:

  • Develop an app on Google Cloud Platform that pushes the boundary on what technology can do for music, film or gaming
  • Enter on your own or form teams of up to 3 members
  • Submit your application between 5 and 28 February 2015
  • Apps will be evaluated based on their originality, effectiveness in addressing a need, use of Google tools, and technical merit

Visit the official Build-Off website to see the full list of rules and FAQs and follow us at #GCPBuildOff on G+ and Twitter for more updates. We cannot wait to see what innovation your creativity leads to next.

- Posted by Greg Wilson, Head of Developer Advocacy


The real-time web is increasingly all around us: tracking your ride approaching on a map while you wait, seeing up-to-date statistics on a live dashboard, getting fresh messages about your friends’ social activities. These are all ways in which we’ve come to expect our online experience to reflect reality in real time. Bringing users timely insight requires more sophisticated handling of data: data that's being generated and processed in ever-increasing volume and diversity.

By real-time, we don’t mean it in the strict sense used by hardware controllers, but near real-time: information generally perceived to be more or less immediate, useful within seconds rather than minutes. A great diversity of solutions exists for real-time problems, and many tools help tackle different pieces of the real-time application puzzle. Google Cloud Platform provides a wide range of components to help you build real-time oriented solutions. Examples include high-performance infrastructure, such as Google Compute Engine virtual machines, advanced container capabilities in the form of Google Container Engine, stream-oriented big data services, as well as a fully managed database oriented toward real-time applications, Firebase.

We’ve recently published two tutorials demonstrating how you can compose platform services into solutions for specific real-time problems. Both leverage the power of Google BigQuery to perform fast, insight-yielding queries on large amounts of data that grow and update in real time relative to their sources. BigQuery supports a streaming option for loading data, and these two initial solutions both leverage that API.

The first tutorial outlines how to perform sentiment analysis on a Twitter stream using BigQuery. To handle the high volume intake, the tutorial demonstrates how to use Kubernetes-managed containers to build an asynchronous, real-time pipeline that buffers Twitter data in Redis before streaming into BigQuery. The second tutorial demonstrates how to use the log forwarding tool Fluentd with a BigQuery connector to stream log data into BigQuery. This data can then be visualized with a chart directly in Google Spreadsheets with native Apps Script support for BigQuery.
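The tutorials cover the full pipelines; as a minimal, hedged sketch of just the streaming step, here is what an insert might look like with the google-cloud-bigquery client library (the table ID and schema fields below are hypothetical, and the real tutorials buffer data through Redis or Fluentd first):

```python
def tweet_to_row(tweet):
    """Flatten a raw tweet dict into a row for a hypothetical tweets table."""
    return {
        "id": tweet["id"],
        "text": tweet["text"],
        "posted_at": tweet["created_at"],
    }

def stream_rows(table_id, rows):
    """Stream rows into BigQuery using the streaming-insert API.

    Needs google-cloud-bigquery and application credentials, so this is
    a sketch rather than something runnable as-is.
    """
    from google.cloud import bigquery
    client = bigquery.Client()
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        raise RuntimeError("streaming insert failed: %s" % errors)
```

Rows streamed this way become queryable within seconds, which is what keeps the resulting dashboards close to real time.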

On the mobile front, Firebase is a managed database with a powerful, developer-friendly API to store and sync data in real-time. Mobile and web client side SDKs automatically sync local data state whenever the data is changed in the database. Particularly useful for mobile developers is the fact that this synchronization is completely tolerant of offline local data changes – even when users go into airplane mode or descend into the subway.

This is just a glimpse into the myriad ways in which tomorrow's real-time problems can be solved today on Google Cloud Platform. We'll continue to add other real-time oriented solutions and information to the section on solutions for real-time.

-Posted by Preston Holmes, Cloud Solutions Architect

Today’s guest blog comes from Christian F. Howes, Vice President of Engineering for StarMaker Interactive, developer of an entertainment platform based on singing and music video apps.

As indicated by the popularity of singing competition shows on TV, karaoke has gone mainstream. But being on TV isn’t always of interest to the hobbyist singer; sometimes all they want is to feel like the lead singer of their favorite song. That’s why StarMaker was created. We built “StarMaker: Sing + Video” as a simple tool that lets people sing along to their tunes of choice. We partnered with the hit reality singing TV show The Voice for our app, “The Voice: On Stage.” Last summer we added video capture capability, which allows fans to create cover videos of their favorite songs and submit them to the show’s casting agents.

In the very early stages of our company, we chose Google App Engine because of its promise of infinite scalability. Fans loved the app and we quickly outgrew the free quotas of App Engine, but soon after partnering with The Voice, our downloads really skyrocketed. So many people were using our app that we needed to support four billion minutes of video singing time (that’s over 7,400 years’ worth of songs). For a small company without any system administrators, this was both exciting and terrifying. It meant we needed a very easy-to-manage and robust platform that allowed our small team to support huge amounts of traffic.

Just when we were beginning to worry about scaling to meet the demand of our customers, I found out that our app would be featured in Apple’s App Store. This would bring even more traffic, meaning I’d have to hire a systems administrator and do a lot more hands-on work monitoring and troubleshooting. When I tried to increase our App Engine quotas, I was met with a pleasant surprise – all I needed to do was increase my plan with App Engine. No extra admins needed, no extra time managing server configurations and load balancing – we could just sit back and keep coding while the App Engine team did all the heavy lifting.

As the audience for our apps grew, we continued to rack up reasons why App Engine was the best solution for us. Admittedly, the first bit of code I'd written for StarMaker wasn't the most polished; some might call it "bad code." Yet, running on App Engine, the app never crashed. By reevaluating how I laid out my data for BigTable and rewriting my query code to take advantage of key queries, memcache, and the NDB API, we saw about 10x performance gains and a significant drop in resource usage.

We see sustained requests for downloads and data around the clock since our users are worldwide, yet there’s been almost no downtime in two years, even with occasional bad code. And today, we still don’t have any sys-admins on staff – just engineers who can focus on creating new features (like video capture) instead of worrying about keeping servers up and running. The cost and resource savings we have seen by using App Engine have allowed us to continue growing our business by focusing our skills on building a great product. Specifically, App Engine's automatic scaling feature (of both the compute instances and the datastore) together with the monitoring tools in place have eliminated our need for a dedicated sys-ops staff person. At our current scale, we would need 1-2 full-time sys-ops staff on a more traditional hosting system.

Additionally, based on our usage spikes, in a more traditional hosting environment we would need enough capacity for about 2.5x our "steady state" traffic. These spikes happen several times a week for us, so we'd almost always be paying for about 2x the capacity we need at any given time. App Engine's minute-by-minute scaling and billing saves us as much as $1,500-$3,000 USD per month.
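To make that arithmetic concrete (the steady-state dollar figure below is hypothetical, not StarMaker's actual bill):

```python
def monthly_savings(steady_state_cost, provision_factor=2.0):
    """Savings from paying for actual load instead of statically
    provisioning provision_factor times the steady-state capacity."""
    traditional_cost = steady_state_cost * provision_factor
    return traditional_cost - steady_state_cost

# With a hypothetical $1,500/month steady state, 2x static provisioning
# would cost $3,000, so per-minute billing saves about $1,500/month.
print(monthly_savings(1500))  # -> 1500.0
```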

App Engine is filled with tools that help make our products more appealing and engaging for users. For instance, task queues allow us to complete non-blocking tasks while responding quickly to client API requests. One such example is sending a notification to alert a friend about a newly shared video. We can quickly queue a task from the API call and the task will be serviced a short while later on another module. This "infrastructure" comes for free with App Engine – we're not spending time managing queues, just writing great code. We're currently in the process of rolling out an integrated notification system. This system will allow our users to subscribe to different types of notifications and choose a delivery method for each type. We support Apple Push Notification Service (APNS), Google Cloud Messaging (GCM), email, and in-app messages – things we could not be doing without a combination of App Engine's task queues and manual scaling modules.
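App Engine's task queue API only runs inside the platform, so here's a hedged, stdlib-only sketch of the same pattern: the request handler enqueues work and returns immediately, while a worker on another thread (playing the role of a separate module) services the task later.

```python
import queue
import threading

notifications = queue.Queue()
sent = []

def handle_share_request(user, video_id):
    """Respond to the API call quickly; defer the notification."""
    notifications.put((user, video_id))  # non-blocking enqueue
    return {"status": "ok", "video": video_id}

def worker():
    """Drain the queue on another thread, like a separate worker module."""
    while True:
        item = notifications.get()
        if item is None:  # sentinel tells the worker to stop
            break
        user, video_id = item
        sent.append("notify %s about video %s" % (user, video_id))
        notifications.task_done()

t = threading.Thread(target=worker)
t.start()
handle_share_request("alice", "v42")
notifications.put(None)  # shut the worker down after the task drains
t.join()
print(sent)  # -> ['notify alice about video v42']
```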

We’re excited about the like-minded community of music lovers we’ve created since we launched StarMaker, and App Engine is helping us to keep that community growing.

- Contributed by Christian F. Howes, Vice President of Engineering, StarMaker Interactive