Posted:

Last year, we rolled out support for deploying your Google App Engine application using git, giving you an easy-to-use mechanism for deploying your application on every push to your cloud repository’s master branch.

Today, we’re happy to extend support for this feature to repositories hosted on GitHub. By connecting your App Engine project to your GitHub repository, you can trigger a deployment by pushing to the project’s master branch on GitHub.

Let’s walk through an example.

Prerequisites: If you don’t have the git tool installed, get it here.

Connecting the repository


  1. Go to the Google Developers Console and create a project or click on an existing project that you wish to sync with GitHub.
  2. Click Cloud Development and then Releases in the left-hand navigation panel.
  3. Next, link your project’s repository to GitHub: under Configuration, click Connect a GitHub repo.
  4. Enter the GitHub repository URL in the dialog box that appears. The repository URL is in the format https://github.com/username/repository. This is the same URL that you open in your web browser when you are viewing the repository on the GitHub site.
  5. Read and accept the consent option in the dialog box and click Connect.
  6. Authorize access to your repository in the GitHub page that opens.
  7. The GitHub repository now appears on the Releases page and is all set up for Python and PHP development.
  8. If you are setting up this feature for use with a Java application, select the Java: Maven Build, Unit Test, and Deploy option in the Release Type field.
  9. Now, every time you push to your project’s master branch on GitHub using git push origin master, the source code will be deployed to App Engine, as in the example below. You can click the Release History tab to see the status of the current deployment.
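A typical deployment then looks like a standard git workflow. Here is a minimal sketch, assuming you have already cloned the repository locally (the commit message is just a placeholder):

# Stage and commit your changes locally
git add .
git commit -m "Update application"

# Push to master on GitHub, which triggers an App Engine deployment
git push origin master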

This feature makes it easier than ever to deploy your App Engine application hosted on GitHub!

- Posted by Weston Hutchins, Product Manager

Posted:
Our guest post today comes from Olivier Devaux, co-founder of feedly, a reading app founded in 2008 in Palo Alto. feedly offers a free version as well as a Pro version that includes power search and integrations with other popular applications, including Evernote, LinkedIn and Hootsuite.

With over 15 million users, feedly is one of the most popular apps for purposeful reading in the world. People can tailor their feedly accounts to serve up their favorite collection of blogs, web sites, magazines, journals and more. Our goal is to deliver to readers the content that matters to them. Over the past year, we have focused on making feedly the reading app of choice for professionals.

For our first few years, we had around four million users, and we hosted all of the content we aggregated on our own servers. We ran a small instance of Google App Engine to extract picture URLs within articles.

In the middle of last year, our servers were overwhelmed with hundreds of thousands of new signups, and we experienced our first service outage. The first thing we did was move all of our static content to App Engine. Within an hour we were up and running again with 10 times the capacity we had before. This turned out to be a good thing – we added millions more users over the next few months and more than doubled in size.

It’s been almost a year since that day, and we’ve greatly expanded our service with Google Cloud Platform. We now use App Engine as a dynamic content delivery network (CDN) for all static content in feedly, as well as to serve the formatted images displayed in the app and on the desktop.

A fast response time is even more important on mobile, and App Engine helps us load images immediately so that there’s no lag when users scroll through their feeds. As a feedly user scrolls through content, the app sends App Engine information in the background about what articles are coming next. App Engine then fetches images from the article page on the Web, determines the best image, stores it in Cloud Storage and receives a serving URL from the Image service. For users, this leads to a seamless scrolling experience.

To optimize the feedly user experience, we make heavy use of the Memcache API, App Engine Modules, and the Task Queue API. Together, these services let us cut the response time for user requests in the app down to milliseconds.

As an engineer, one of my favorite things about App Engine is that it generates detailed usage reports so we can see the exact cost of our code, like CPU usage or the amount we’ve spent to date, and continue to optimize our performance.

We learned the hard way what happens when you don’t prepare for the unexpected. But this turned out to be a blessing in disguise, because it prompted us to move to Cloud Platform, and expand and improve our service. App Engine has taken pressure off our small team and allowed us to focus on building the best reading experience for our users. With Google’s infrastructure on the backend, today we only need to worry about pushing code.

- Posted by Olivier Devaux, co-founder of feedly

Posted:
Today, we are making it easier for you to run Hadoop jobs directly against your data in Google BigQuery and Google Cloud Datastore with the Preview release of the Google BigQuery connector and the Google Cloud Datastore connector for Hadoop. The Google BigQuery and Google Cloud Datastore connectors implement Hadoop’s InputFormat and OutputFormat interfaces for accessing data. These two connectors complement the existing Google Cloud Storage connector for Hadoop, which implements the Hadoop Distributed File System interface for accessing data in Google Cloud Storage.

The connectors can be installed and configured automatically when you deploy your Hadoop cluster with bdutil, simply by including the extra “env” files:
  • ./bdutil deploy bigquery_env.sh
  • ./bdutil deploy datastore_env.sh
  • ./bdutil deploy bigquery_env.sh datastore_env.sh

[Diagram of Hadoop on Google Cloud Platform]

These three connectors allow you to directly access data stored in Google Cloud Platform’s storage services from Hadoop and other Big Data open source software that uses Hadoop's IO abstractions. As a result, your valuable data is available simultaneously to multiple Big Data clusters and other services, without duplication. This should dramatically simplify the operational model for your Big Data processing on Google Cloud Platform.
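As a quick sketch of what this direct access looks like in practice: once a cluster is deployed with bdutil, standard Hadoop tools on the cluster can read and write Cloud Storage data through gs:// paths via the Cloud Storage connector. The bucket, paths, and jar location below are placeholders:

# List input data stored in Cloud Storage directly from Hadoop
hadoop fs -ls gs://my-bucket/wordcount/input/

# Run the stock Hadoop word-count example against Cloud Storage paths
# (the location of the examples jar varies by Hadoop distribution)
hadoop jar /path/to/hadoop-mapreduce-examples.jar wordcount \
  gs://my-bucket/wordcount/input/ gs://my-bucket/wordcount/output/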

Word-count MapReduce code samples are also available to help you get started.

As always, we would love to hear your feedback and ideas on improving these connectors and making Hadoop run better on Google Cloud Platform.

-Posted by Pratul Dublish, Product Manager

Posted:
Today, we are announcing the release of App Engine 1.9.3.

This release offers stability and scalability improvements, themes that we will continue to build on over the next few releases. We know that you rely on App Engine for critical applications, and with the significant growth we’ve experienced over the past couple of years, we wanted to take a step back and spend a few release cycles with a laser focus on the core functionality that impacts your service and end users. As a result, new features and functionality may take a back seat to these improvements. That said, we fully expect to continue making progress with existing services, including Dedicated Memcache.

Dedicated Memcache
Today we are pleased to announce the General Availability of our dedicated memcache service in the European Union. Dedicated Memcache lets you provision additional, isolated memcache capacity for your application. For more details about this service, see our recent announcement.

Our goal is to make sure that App Engine is the best place to grow your application and business rapidly. As always, you can find the latest SDK on our release page along with detailed release notes and can share questions/comments with us at Stack Overflow.

Posted:
When Applibot needed a flexible computing architecture to help them grow in the competitive mobile gaming market in Japan, they turned to Google Cloud Platform. When Tagtoo, an online content tagging startup, needed to tap into the power of analytics to better serve digital ads to customers in Taiwan, they turned to Google Cloud Platform. In fact, companies all over the world are turning to Cloud Platform to create great apps and build successful businesses.


Now, more developers in Asia Pacific can experience the speed and scale of Google’s infrastructure with the expansion of Cloud Platform support in the region. Today we switched on Compute Engine zones in Asia Pacific, along with Cloud Storage and Cloud SQL.


This region runs on our latest cloud technology, including Andromeda, the codename for Google’s network virtualization stack, which provides blazing-fast networking performance, as well as transparent maintenance with live migration and automatic restart for Compute Engine.


In addition to local product availability, the Google Cloud Platform website and the developer console will also be available in Japanese and Traditional Chinese. These websites have updated use cases, documentation and all sorts of goodies and tools to help local developers get started with Google Cloud Platform. Developers interested in learning more about Google Cloud Platform can join one of the Google Cloud Platform Global Roadshow events coming up in Tokyo, Taipei, Seoul or Hong Kong.


The launch of Cloud Platform support in Asia Pacific is in line with our increasing investment in the region and our commitment to developers around the world. To all our customers in the region, we would like to say “THANK YOU / 謝謝 / ありがとう ” for your support of Google Cloud Platform.

-Posted by Howard Wu, Head of Asia Pacific Marketing for Google Cloud Platform

Posted:
Our friends at Google recently published a comprehensive overview of how to manage Google Compute Engine infrastructure via the various automation platforms available. The GCE team invited us to add our perspective on this topic and what follows here is a look at why we love GCE, how our customers are succeeding with Chef+GCE, and technical details on automating GCE resources with Chef.

Chef is betting on Compute Engine
You’ve often heard us reference the ‘coded business’. In short, we propose that technology has become the primary touch point for customers. Demand is relentless. And the only way to win the race to market is by automating the delivery of IT infrastructure and software.

This macro shift began in part because of Google’s success in leveraging large-scale compute to rapidly deliver goods and services to market. And when we say ‘large-scale’, there aren’t many, if any, businesses with more compute resources, expertise, and experience than Google.

So it makes a ton of sense that Google would pivot their massive compute infrastructure into an ultra-scalable cloud service. Obviously they know what they’re doing and now everyone from startups to enterprises can tap into Google’s compute mastery for themselves.

Working with the Compute Engine team fits perfectly into not only our view of how the IT industry, and business itself, is changing, but also what our customers want. Choice. Speed (lots and lots of speed). Scale. Flexibility. Reliability.

Why customers love using Chef and Google Compute Engine

Cloud-based delivery

Like the Google Cloud Platform, Chef offers customers all the benefits of cloud-based delivery. New users can get instant access to a powerful Enterprise Chef server hosted on the cloud, no credit card is required, and you can manage up to five instances for free.

When you want to use Chef to manage larger numbers of nodes, you add this capability on a simple, pay-as-you-go basis. Customers can get started using Chef to configure GCE in minutes, start to finish. Ian Meyer, Technical Ops Manager at AdMeld (now part of Google), praises the SaaS delivery model of Hosted Chef:

“Prior to deploying Hosted Chef,” said Meyer, “we did everything manually. It generally took me a couple of weeks to get access to the servers I needed and at least a day to add a new developer. With Chef, I can now add a couple of developers within 20 minutes. Additionally, when we set up a new ad serving system with data bags, the set-up time goes from two to three days to an hour. This is simply one of those tools that you need regardless of what your environment is.”

Speed & Scale
Just as customers are choosing GCE for its speed, our customers appreciate how Chef’s execution model pushes the heavy lifting to the Chef client(s) rather than compiling configuration instructions on the server. Chef stands well above the field with a single Chef server handling 10,000 nodes at the default 30-minute update interval.

Flexibility
Our customers tell us that Chef is more flexible than any other offering. When the situation calls for it, Chef allows advanced users to work directly with infrastructure primitives and a full-fledged modern Ruby-based programming language.

Community
Chef customers can tap into the shared knowledge, expertise, and helping hands of tens of thousands of Chef Community members, not to mention over 1,000 Chef Cookbooks. The Chef Community provides a vibrant, welcoming resource for learning best practices. In recent years, high-profile vendors have contributed to and built on top of Chef, including Google, Rackspace, Dell, HP, Facebook, VMware, AWS and IBM.

Google will be a featured partner at this year’s ChefConf. Join Google’s Eric Johnson as he shares technical details about Chef’s integration and future roadmap with GCE.

Chef and GCE: Under the Hood
Chef makes it easy to get started with GCE. Once you’ve obtained a GCE account and configured your Chef workstation, you can extend Chef’s knife command-line tool with the knife-google plugin:

gem install knife-google
knife google setup

That last command will walk you through a one-time configuration of your knife workstation with GCE credentials.

Now you can use knife with the cookbooks on your Chef server to deploy infrastructure from Chef recipes to GCE instances. Here’s an example where we use Chef to create a Jenkins master node hosted in GCE:

knife google server create jenkins1 -Z us-central1-a -m n1-highcpu-2 -I debian-7-wheezy-v20131120 -r 'jenkins::master'

This command takes the following actions:

  • Creates a Debian VM instance in GCE’s us-central1-a zone with machine type n1-highcpu-2
  • Registers it as a node named ‘jenkins1’ with the Chef Server
  • Configures the node’s run_list attribute as ‘jenkins::master’
  • Uses the ssh protocol to run chef-client with that ‘master’ recipe from the Jenkins community cookbook on the new system.
At the end of this process, you’ll see a message like the one below:

Chef Client finished, 19/21 resources updated in 40.207903203 seconds

And now you have a Jenkins master. This and similar knife commands may be integrated into automation that can also spin up Jenkins tester systems for a complete continuous integration pipeline backed by GCE.

You can then use Chef Server features like search to manage the pipeline as long as you need it. But since Chef makes deployment so simple, and GCE makes it so fast, you can just destroy part or all of it when it’s no longer needed...
# Commands like this destroy unneeded nodes
knife google server delete tester1 -y --purge

… and recreate nodes ‘just-in-time style’ when demand picks back up again.
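For example, a tester node could be brought back with the same kind of knife command used earlier; the run-list shown here is purely illustrative:

# Recreate a tester node on demand (the run-list name is a placeholder)
knife google server create tester1 -Z us-central1-a -m n1-highcpu-2 -I debian-7-wheezy-v20131120 -r 'role[jenkins-tester]'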

The quick turnaround on deployment and convergent configuration updates via Chef + GCE allows teams to experiment with developer automation at very low cost.

To get a deeper sense of how you can exploit the capabilities of GCE, please visit our GCE page outlining details around Chef’s knife-google plugin and explore the community library of coded infrastructure.

-Contributed by Adam Edwards, Platform Engineering at Chef

Posted:
We love seeing our developers create groundbreaking new applications on top of our infrastructure. To give current and prospective users insight into the vast array of applications being built on Cloud Platform, we recently added a new case study. Whether you’re interested in learning how businesses are building on our platform or just looking for inspiration for your next project, we hope you find it informative.

Kahuna
Kahuna used App Engine to create an automated mobile-engagement engine that would turn people who downloaded a mobile app into truly engaged customers.

Check out cloud.google.com/customers to see the full list of case studies. You can read about companies of all sizes and industries, with a wide range of use cases, that are using Google Cloud Platform to build their products and businesses.

To learn more about Kahuna, please visit www.usekahuna.com.

-Posted by Chris Palmisano, Account Manager