Posted by Dan Ciruli, Product Manager, Google Cloud Platform Team
The Google Feed API was one of Google’s original AJAX APIs, first announced in 2007. It had a good run. However, interest in and use of the API have waned over time, and it is running on API infrastructure that is now two generations old at Google.
In April 2012, along with many of our other free APIs, we announced that it would be deprecated in three years’ time. As of April 2015, the deprecation period has elapsed. While we have continued to operate the API in the interim, it is now time to announce the turndown.
As a final courtesy to developers, we plan to operate the API until September 29, 2016, when Google Feed API will cease operation. Please ensure that your application is no longer using this API by then.
Google appreciates how important APIs and developer trust are, and we do not take decisions like this one lightly. We remain committed to providing great services and to being open and communicative about their statuses.
For those developers who find RSS an essential part of their workflow, there are commercial alternatives that may fit your use case well.
Posted by Rob Jagnow, Software Engineer, Google VR
Whether you're playing a game or watching a video, VR lets you step inside a new world and become the hero of a story. But what if you want to tell a story of your own?
Producing immersive 3D animation can be difficult and expensive. It requires complex software to set keyframes with splined interpolation or costly motion capture setups to track how live actors move through a scene. Professional animators spend considerable effort to create sequences that look expressive and natural.
At Daydream Labs, we've been experimenting with ways to reduce technical complexity and even add a greater sense of play when animating in VR. In one experiment we built, people could bring characters to life by picking up toys, moving them through space and time, and then replaying the scene.
As we watched people play with the animation experiment, we noticed a few things:
The need for complex metaphors goes away in VR: What can be complicated in 2D can be made intuitive in 3D. Instead of animating with graph editors or icons representing location, people could simply reach out, grab a virtual toy, and carry it through the scene. These simple animations had a handmade charm that conveyed a surprising degree of emotion.
The learning curve drops to zero: People were already familiar with how to interact with real toys, so they jumped right in and got started telling their stories. They didn't need a lengthy tutorial, and they were able to modify their animations and even add new characters without any additional help.
People react to virtual environments the same way they react to real ones: When people entered a playful VR environment, they understood it was a safe space to play with the toys around them. They felt comfortable performing and speaking in funny voices. They took more risks knowing the virtual environment was designed for play.
To create more intricate animations, we also built another experiment that let people independently animate the joints of a single character. They could record a character’s movement as they separately animated the feet, hands, and head, just like you would with a puppet.
VR allows us to rethink software and make certain use cases more natural and intuitive. While this kind of animation system won’t replace professional tools, it can allow anyone to tell their own stories. There are many examples of using VR for storytelling, especially with video and animation, and we’re excited to see new perspectives as more creators share their stories in VR.
Posted by Pete Warden, Software Engineer
When we started building TensorFlow, supporting mobile devices was a top priority. We were already supporting many of Google’s mobile apps like Translate, Maps, and the Google app, which use neural networks running on devices. We knew that we had to make mobile a first-class part of open source TensorFlow.
TensorFlow has been available to developers on Android since launch, and today we're happy to add iOS in v0.9 of TensorFlow, along with Raspberry Pi support and new compilation options.
To build TensorFlow on iOS we’ve created a set of scripts, including a makefile, to drive the cross-compilation process. The makefile can also help you build TensorFlow without using Bazel, which is not always available.
All this is in the latest TensorFlow distribution. You can read more by visiting our Mobile TensorFlow guide and the documentation in our iOS samples and Android sample. The mobile samples allow you to classify images using the ImageNet Inception v1 classifier.
These mobile samples are just the beginning; we'd love your help and your contributions. Tag social media posts with #tensorflow so we can hear about your projects!
See the full TensorFlow 0.9.0 release notes here.
Posted by Adam Champy, Product Manager for Google Cast SDK
Google Cast makes it easy for developers to extend their mobile experience to the most beautiful screens and speakers in the home.
At Google I/O, we announced our new Google Cast SDK. This new SDK focuses on making development for Cast quicker, more reliable, and easier to maintain. We’ve introduced full state management that helps you implement the right abstraction between your app and Google Cast. We’ve also delivered a full Cast user experience, matching the Google Cast design checklist.
Today we are releasing this SDK for Android and iOS Senders, including an introductory video, full documentation, and reference sample apps and codelab tutorials for both platforms. Initial developer feedback is that first-time implementations can save significant development time compared with our previous SDKs.
A few things we’ve announced will be coming in the next few months, including a customizable Expanded Controller and customization options for the Mini Controller, to help accelerate development even further.
Drop by our Cast developer site to learn about the new SDK and APIs, and join our developer community on Google+ at g.co/googlecastdev to discuss this with other developers.
Originally posted on Firebase blog
For most developers, building an authentication system for your app can feel a lot like paying taxes. Both are relatively hard-to-understand tasks that you have no choice but to do, and both can have big consequences if you get them wrong. No one ever started a company to pay taxes, and no one ever built an app just so they could create a great login system. They just seem to be inescapable costs.
But now, you can at least free yourself from the auth tax. With Firebase Authentication, you can outsource your entire authentication system to Firebase so that you can concentrate on building great features for your app. Firebase Authentication makes it easier to get your users signed in without having to understand the complexities behind implementing your own authentication system. It offers a straightforward getting-started experience and optional UX components designed to minimize user friction, and it is built on open standards and backed by Google infrastructure.
Implementing Firebase Authentication is relatively fast and easy. From the Firebase console, just choose the popular login methods that you want to offer (like Facebook, Google, Twitter, and email/password) and then add the Firebase SDK to your app. Your app will then be able to connect securely with the Firebase Realtime Database, Firebase Storage, or your own custom backend. If you have an auth system already, you can use Firebase Authentication as a bridge to other Firebase features.
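For instance, once the SDK is initialized, signing a user in with the email/password provider takes only a few lines on the web. Here is a minimal sketch, assuming the v3 JavaScript SDK and that the email/password provider is enabled in your console (the email and password variables are placeholders):

// Sign in an existing user with their email and password.
firebase.auth().signInWithEmailAndPassword(email, password)
  .then(function(user) {
    console.log('Signed in as ' + user.email);
  })
  .catch(function(error) {
    console.error('Sign-in failed: ' + error.message);
  });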
Firebase Authentication also includes an open source UI library that streamlines building the many auth flows required to give your users a good experience. Password resets, account linking, and login hints that reduce the cognitive load around multiple login choices: they are all pre-built with Firebase Authentication UI. These flows are based on years of UX research optimizing the sign-in and sign-up journeys on Google, YouTube, and Android. It includes Smart Lock for Passwords on Android, which has led to significant improvements in sign-in conversion for many apps. And because Firebase UI is open source, the interface is fully customizable, so it feels like a completely natural part of your app. If you prefer, you are also free to create your own UI from scratch using our client APIs.
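On the web, wiring up the UI library can be as simple as the sketch below. It assumes the FirebaseUI script and stylesheet are included on the page and that a container element with the given ID exists (the ID is a placeholder):

// Launch the pre-built FirebaseUI sign-in widget with two providers.
var ui = new firebaseui.auth.AuthUI(firebase.auth());
ui.start('#firebaseui-auth-container', {
  signInOptions: [
    firebase.auth.GoogleAuthProvider.PROVIDER_ID,
    firebase.auth.EmailAuthProvider.PROVIDER_ID
  ]
});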
Firebase Authentication is also built around openness and security. It leverages OAuth 2.0 and OpenID Connect, industry standards designed for security, interoperability, and portability. Members of the Firebase Authentication team helped design these protocols and used their expertise to weave in the latest security practices, like ID tokens, revocable sessions, and native app anti-spoofing measures, to make your app easier to use and to help you avoid many common security problems. The code is independently reviewed by the Google security team, and the service is protected by Google’s infrastructure.
Fabulous uses Firebase Authentication to power its login system. Fabulous is a research-based app incubated in Duke University’s Center for Advanced Hindsight. It helps users embark on a journey to reset poor habits, replacing them with healthy rituals, with the ultimate goal of improving health and well-being.
The developers of Fabulous wanted to implement an onboarding flow that was easy to use, required minimal updates, and reduced friction for the end user. They wanted an anonymous option so that users could experiment with the app before signing up. They also wanted to support multiple login types and offer a sign-in flow consistent with the look and feel of the app.
“I was able to implement auth in a single afternoon. I remember that I spent weeks before creating my own solution that I had to update each time the providers changed their API” - Amine Laadhari, Fabulous CTO.
Chu-Day is an application (available on Android and iOS) that helps couples never forget the dates that matter most to them. It was created by the Korean firm Malang Studio, which develops character-centric, gamified lifestyle applications.
Generally, countdown and anniversary apps do not require users to sign in, but Malang Studio wanted to make Chu-Day special and differentiate it from others by offering the ability to connect couples so they could jointly count down to a special anniversary date. This required a sign-in feature, and in order to prevent users from dropping out, Chu-Day needed to make the sign-in process seamless.
Malang Studio was able to integrate an onboarding flow for their apps, using Facebook and Google Sign-In, in one day, without having to worry about server deployment or databases. In addition, Malang Studio has also been taking advantage of the Firebase User Management Console, which helped them develop and test their sign-in implementation as well as manage their users:
“Firebase Authentication required minimum configuration so implementing social account signup was easy and fast. User management feature provided in the console was excellent and we could easily implement our user auth system.” - Marc Yeongho Kim, CEO / Founder from Malang Studio
For more about Firebase Authentication, visit the developers site and watch our I/O 2016 session, “Best practices for a great sign-in experience.”
Posted by Shanea King-Roberson, Lead Program Manager (Twitter: @shaneakr, Instagram: @theshanea)
Do you have an idea for an app but you don’t know where to start? There are over 1 billion Android devices worldwide, providing a way for you to deliver your ideas to the right people at the right time. Google, in partnership with Udacity, is making Android development accessible and understandable to everyone, so that regardless of your background, you can learn to build apps that improve the lives of people around you.
Enroll in the new Android Basics Nanodegree. This series of courses and services teaches you how to build simple Android apps, even if you have little or no programming experience. Take a look at some of the apps built by our students:
The app "ROP Tutorial" built by student Arpy Vanyan raises awareness of a potentially blinding eye disorder called Retinopathy of Prematurity that can affect newborn babies.
And user Charles Tommo created an app called “Dr Malaria” that teaches people ways to prevent malaria.
With courses designed by Google, you can learn skills that are applicable to building apps that solve real world problems. You can learn at your own pace to use Android Studio (Google’s official tool for Android app development) to design app user interfaces and implement user interactions using the Java programming language.
The courses walk you through step-by-step on how to build an order form for a coffee shop, an app to track pets in a shelter, an app that teaches vocabulary words from the Native American Miwok tribe, and an app on recent earthquakes in the world. At the end of the course, you will have an entire portfolio of apps to share with your friends and family.
Upon completing the Android Basics Nanodegree, you also have the opportunity to continue your learning with the Career-track Android Nanodegree (for intermediate developers). The first 50 participants to finish the Android Basics Nanodegree have a chance to win a scholarship for the Career-track Android Nanodegree. Please visit udacity.com/legal/scholarship for additional details and eligibility requirements. You now have a complete learning path to help you become a technology entrepreneur or, most importantly, build very cool Android apps for yourself, your communities, and even the world.
All of the individual courses that make up this Nanodegree are available online for no charge at udacity.com/google. In addition, Udacity provides paid services, including access to coaches, guidance on your project, help staying on track, career counseling, and a certificate upon completion.
Along the way, you will be exposed to introductory computer science concepts using the Java programming language.
To enroll in the Android Basics Nanodegree program, click here.
See you in class!
Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps
At Google I/O 2016, we launched a new Google Sheets API—click here to watch the entire announcement. The updated API includes many new features that weren’t available in previous versions, including access to functionality found in the Sheets desktop and mobile user interfaces. My latest DevByte video shows developers how to get data into and out of a Google Sheet programmatically, walking through a simple script that reads rows out of a relational database and transfers the data to a brand new Google Sheet.
Let’s take a sneak peek at the code covered in the video. Assuming that SHEETS has been established as the API service endpoint, SHEET_ID is the ID of the Sheet to write to, and data is an array with all the database rows, this is the only call developers need to make to write that raw data into the Sheet:
SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID, range='A1', body=data, valueInputOption='RAW').execute()
Reading the data back out of the Sheet is just as straightforward. This snippet fetches all the rows and prints them:

rows = SHEETS.spreadsheets().values().get(
    spreadsheetId=SHEET_ID, range='Sheet1').execute().get('values', [])
for row in rows:
    print(row)
If you’re ready to get started, take a look at the Python quickstart, or the quickstarts in a variety of other languages, before checking out the DevByte. If you want a deeper dive into the code covered in the video, check out the post at my Python blog. Once you get going with the API, one of the challenges developers face is constructing the JSON payload to send in API calls—the common operations samples can really help you with this. Finally, if you’re ready for a meatier example, check out our JavaScript codelab, where you’ll write a sample Node.js app that manages customer orders for a toy company, using the same database featured in this DevByte, so the video helps prepare you for the codelab.
We hope all these resources help developers create amazing applications and awesome tools with the new Google Sheets API! Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!
Posted by Laurence Moroney, Developer Advocate
“Without standards, things don’t work right,” said Alex Danilo, a Googler working on the HTML5 specs, trying to help us all build a better web.
In 1999, the Mars Climate Orbiter mission failed because of a software bug: one onboard module produced output in one system of measurement, while another module expected data in a different one. Alex discusses many other examples of how the lack of industry standards can cause problems, such as early rail systems having different gauge widths in different states, impeding travel.
Alex works with the Web Platform Working Group, whose charter is to continue the development of the HTML language, improving client-side application development, including APIs and markup vocabularies.
He shares with us details of the upcoming HTML 5.1, a refinement of HTML5; the great validator tool, which makes it easier for developers to ensure that their markup meets standards; and the Test the Web Forward initiative, which helps uncover bugs and compatibility issues between browsers.
You can learn more about Google and Web development at the Web Fundamentals site.
Posted by Akshay Kannan, Product Manager
Today we're launching Nearby on Android, a new surface for users to discover and interact with the things around them. This extends the Nearby APIs we launched last year, which make it easy to discover and communicate with other nearby devices and beacons. Earlier this year, we also started experimenting with Physical Web beacons in Chrome for Android. With Nearby, we’re taking this a step further.
Imagine pulling up a barcode scanner when you’re at the store, or discovering an audio tour while you’re exploring a museum: these are the sorts of experiences that Nearby can enable. To make this possible, we're allowing developers to associate their mobile app or website with a beacon.
A number of developers have already been building compelling proximity-based experiences using beacons and Nearby.
Getting started is simple. First, get some Eddystone beacons; you can order these from any one of our Eddystone-certified manufacturers. Android devices and other BLE-equipped smart devices can also be configured to broadcast in the Eddystone format.
Second, configure your beacon to point to your desired experience. This can be a mobile web page using the Physical Web, or you can link directly to an experience in your app. For users who don’t have your app, you can either provide a mobile web fallback or request a direct app install.
Nearby has started rolling out to users as part of the upcoming Google Play Services release and will work on Android devices running 4.4 (KitKat) and above. Check out our developer documentation to get started. To learn more about Nearby Notifications in Android, also check out our I/O 2016 session, starting at 17:10.
Posted by Rob Jagnow, Software Engineer, Google VR
At Daydream Labs, we pair engineers with designers to rapidly prototype virtual reality concepts, and we’ve already started to share our learnings with the VR community. This week, we focus on social. In many of our experiments, we’ve found that being in VR with others amplifies and improves experiences in VR, as long as you take a few things into account. Here’s what we’ve learned so far:
Simplicity can be powerful: Avatars (the virtual representations of people in VR) can be simplified to just a floating head with googly eyes and still convey a surprising range of emotion, intent, and social cues. Eyes give people a location to look to and speak towards, but they also increase face-to-face communication by making even basic avatars feel more human. When we combine this with hands and a spatially located voice, it comes together to create a sense of shared presence.
Connecting the real and the virtual: Even when someone is alone in VR, you can make them feel connected. For example, you can carry on a conversation with them even if you’re not in VR yourself. Your voice can serve as a subtle reminder that they’re spanning two spaces—the real and the virtual. This asymmetric experience can be a fun way to ground party games where one player is in VR and the others aren’t, like charades or Pictionary.
But when someone else joins that virtual world with them, we’ve seen time and time again that the real world melts away. For most multiplayer activities, this is ideal because it makes the experience incredibly engaging.
Join the party: When you first start a VR experience with others, it can be tough to know where to begin. After all, it’s easier to join a party than to start one! Create shared goals for multiplayer experiences. When you give people something to play with together, it can help them break the ice, make friends, and have more fun in VR.
You think you know somebody: Lastly, people who know each other offline immediately notice differences in a person’s stature or height in VR. We can re-calibrate environments, playing with height and scale values, to build a VR world where everyone appears to be the same height. Or we can adjust display settings to make each person feel like they’re the tallest person in the room. Height is such a powerful social cue in the real world, and we can tune these settings in VR to nudge people into having more friendly, prosocial interactions.
If you’d like to learn more about Daydream Labs and what we’ve learned so far, check out our recent Lessons Learned from VR Prototyping talk at Google I/O.
Posted by Uttam Kumar Tripathi, Global Lead, Developer Agency Program
Back in December 2015, we shared our initial plans to offer a unique program to software development agencies working on mobile apps.
The Agency Program is an effort by Google’s Developer Relations team to work closely with development agencies around the world and help them build high quality user experiences. It includes providing agencies with personalized training through local events and hangouts, dedicated content, priority support from product and developer relations teams, and early access to upcoming developer products.
Over the past few months, the program drew a lot of interest from hundreds of agencies, and we have since successfully launched it in a number of countries, including India, the UK, Russia, Indonesia, the USA, and Canada.
After working with various agencies for several months, the Agency Program has now launched certification for those partners that have undergone the required training and demonstrated excellence in building Android applications using our platforms. We hope this makes it easier for clients looking to hire an agency to make an informed decision, while also pushing the entire development agency ecosystem to improve.
The list of our first set of certified agencies is available here.
We do plan to review and add more agencies to this list over the year and also expand the program to other countries.
Posted by Karin Levi, Firebase Marketing Manager
This year's Google I/O was an exciting time for Firebase. In addition to sharing the many innovations in our platform, we also hatched a time-traveling digital fish named Firebass.
Firebass is an Alternate Reality Game (ARG) that lives across a variety of static web pages. If you haven’t played it yet, you might want to stop reading now and go fishing. After you’ve caught the Firebass and passed the challenge, come back; we’re going to talk about how we built Firebass.
We partnered with Instrument, a Portland-based digital creative agency, to help us create the game. We chose the ARG format because it allowed us to leverage developers’ own software tools and ingenuity for game functionality.
Our primary objective behind Firebass was to make you laugh, while teaching you a little bit about the new version of Firebase. The payoff for us? We had a blast building it. The payoff for you? A chance to win a free ticket to I/O 2017.
To begin, we needed to establish a central character and theme. Through brainstorming and a bit of serendipity, Firebass was born. Firebass is the main character, who has an instinctive desire to time-travel back through prior eras of the web. Through developing the story, we had the chance to revisit memorable old designs and technologies from the web’s past; as you can imagine, this was really fun.
We put together a functional prototype of the first puzzle to test with our own developers here at Google. This helped us gauge both the enjoyment level and the difficulty of the puzzles. Puzzle clues were created by thinking of various ways to obfuscate information that developers would be able to recognize and manipulate. Ideas included encoding information in binary, base64, hex, inside images, and in other assets such as audio files.
The core goal with each of the puzzles was to make them logical but not too difficult; we wanted to make sure players stayed engaged. The bulk of the game’s content was stored in Firebase, which allowed us to prevent players from accessing certain game details too early by inspecting the source code. As an added bonus, this also allowed us to demonstrate a use case for Firebase remote data storage.
One of our first challenges was to find a way to communicate a story through static web pages. Our solution was to create a fake command line interface that acted as an outlet for Firebass to interact with players.
In order to ground our time travel story further, we kept the location of Firebass consistent at https://probassfinders.foo/ but changed the design with each puzzle era.
After establishing the Pro Bass Finders site and fake terminal as the centerpieces of the game, we focused on fleshing out the rest of the puzzle mechanics. Each puzzle began with the era-specific design of the Pro Bass Finders home page. We then concepted new puzzle pieces and designed additional pages to support them. An example of this was creating a fake email archive to hide additional clues.
Another clue involved the QR code pieces in puzzle 2.
The QR codes demonstrate Firebase time-based read permissions and provide a way to keep players revisiting the site prior to reaching the end of puzzle 2. There were a total of three pieces of a QR code that each displayed at different times during the day. It was really fun and impressive to see all of the different ways players were able to come up with the correct answer. The full image translates to ‘Locating’, making the answer the letter ‘L’, but many players managed to solve this without needing to read the QR code. You're all smart cookies.
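For readers curious how the time gating works, Firebase Realtime Database security rules can reference the server’s current time through the now variable (milliseconds since the epoch). A minimal sketch of the idea follows; the paths and timestamps here are hypothetical, not the game’s actual rules:

{
  "rules": {
    "qrPieces": {
      "piece1": { ".read": "now > 1466485200000" },
      "piece2": { ".read": "now > 1466514000000" },
      "piece3": { ".read": "now > 1466542800000" }
    }
  }
}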
Puzzle 3 encompassed our deep nostalgia for the early web, and we did our best to authentically represent the anti-design look and feel of the 90s.
In one of the clues, we demonstrated Firebase Storage by storing an audio file remotely. Solving this required players to reference Firebase documentation to finish writing the code to retrieve the file.
<!-- connect to Firebase Storage below -->
<script>
  console.log('TODO: Complete connection to Firebase Storage');
  var storageRef = firebase.app().storage().ref();
  var file = storageRef.child('spectrogram.wav');
  // TODO: Get download URL for file (https://developers.google.com/firebase/docs/storage/web/download-files)
</script>
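One possible completion, assuming the v3 Firebase JavaScript SDK (a sketch of the idea rather than the one exact answer players had to write):

<script>
  var storageRef = firebase.app().storage().ref();
  var file = storageRef.child('spectrogram.wav');
  // Resolve a download URL for the file, then play the audio clue.
  file.getDownloadURL().then(function(url) {
    new Audio(url).play();
  });
</script>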
While the contest was still active, players who completed the game were given a URL to submit their information for a chance to win a ticket to Google I/O 2017. After the contest was closed, we simply changed the final success message to provide a URL directly to the Firebass Gift Shop, a treasure in and of itself. :)
This was an unforgettable experience with a fervently positive reaction. When puzzle 3 unlocked, server traffic increased 30x! The community response in sharing photos, Slack channels, music, jokes, posts, etc. was incredible. And all because of one fish. We can’t wait to see all the swimmer winners next year at I/O 2017. Until then, try playing the game yourself at firebase.foo. Thank you, Firebass. Long may you swim.
><(((°<
Google Forms has become a popular tool for polling colleagues or creating a pop quiz for students. Creating a few Forms manually may be manageable, but what if you needed to create hundreds or thousands? This is where Google Apps Script can help you scale, by giving you the ability to generate Google Forms programmatically. In this latest edition of the Launchpad Online developer video series, we focus on the JavaScript snippet below that shows you just how easy it is to automate the creation of Google Forms:
function createForm() {
  // create & name Form
  var item = "Speaker Information Form";
  var form = FormApp.create(item)
      .setTitle(item);

  // single line text field
  item = "Name, Title, Organization";
  form.addTextItem()
      .setTitle(item)
      .setRequired(true);

  // multi-line "text area"
  item = "Short biography (4-6 sentences)";
  form.addParagraphTextItem()
      .setTitle(item)
      .setRequired(true);

  // radiobuttons
  item = "Handout format";
  var choices = ["1-Pager", "Stapled", "Soft copy (PDF)", "none"];
  form.addMultipleChoiceItem()
      .setTitle(item)
      .setChoiceValues(choices)
      .setRequired(true);

  // (multiple choice) checkboxes
  item = "Microphone preference (if any)";
  choices = ["wireless/lapel", "handheld", "podium/stand"];
  form.addCheckboxItem()
      .setTitle(item)
      .setChoiceValues(choices);
}
If you’re ready to get started, you can find more information, including another intro code sample, in the Google Forms reference section of the Apps Script docs. In the video, I challenge viewers to enhance the code snippet above to read in “forms data” from an outside source such as a Google Sheet, Google Doc, or even an external database (accessible via Apps Script’s JDBC Service) and generate multiple Forms from it. What are other things you can do with Forms?
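As a starting point for that challenge, here is a minimal sketch that reads rows from a Sheet and creates one Form per row. It assumes a Sheet whose first column holds a form title and whose second column holds an opening question; the Sheet ID is a placeholder:

function createFormsFromSheet() {
  // Read every row from the first sheet of the spreadsheet.
  var rows = SpreadsheetApp.openById('YOUR_SHEET_ID')
      .getSheets()[0].getDataRange().getValues();
  // Skip the header row, then create a Form for each data row.
  rows.slice(1).forEach(function(row) {
    var form = FormApp.create(row[0]).setTitle(row[0]);
    form.addTextItem().setTitle(row[1]).setRequired(true);
  });
}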
One example is illustrated by this Google Docs add-on I created for users to auto-generate Google Forms from a formatted Google Doc. If you’re looking to do integration with a variety of Google services, check out this advanced Forms quickstart that uses Google Sheets, Docs, Calendar, and Gmail! Finally, Apps Script also powers add-ons for Google Forms. To learn how to write those, check out this Forms add-on quickstart.
We hope the DevByte and all these examples inspire you to create awesome tools with Google Forms, taking the manual creation burden off your shoulders! If you’re new to the Launchpad Online developer series, we share technical content aimed at novice Google developers, as well as discuss the latest tools and features to help you build your app. Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!
Turning a great app into a successful business requires more than simply releasing your app and calling it a day. You need to quickly adapt to your users’ feedback, test out new features, and deliver the content that your users care about most.
This is what Firebase Remote Config is made for. By allowing you to change the look and feel of your app from the cloud, Firebase Remote Config enables you to stay responsive to your users’ needs. It also enables you to deliver different content to different users, so you can run experiments, gradually roll out features, and even deliver customized content based on how your users interact with your app.
Let's look at what you can accomplish when you wire up your app to work with Remote Config.
We've all had the experience of shipping an app and discovering soon afterwards that it was less than perfect. Maybe you had incorrect or confusing text that your users don't like. Maybe you made a level in your game too difficult, and players aren't able to progress past it. Or maybe it was something as simple as adding an animation that takes too long to complete.
Traditionally, you'd need to fix these kinds of mistakes by updating those values in your app's code, building and publishing a new version of your app, and then waiting for all your users to download the new version.
But if you've wired up your app for Remote Config in the Firebase platform, you can quickly and easily change those values directly in the cloud. Your app can download the new values the next time it starts and address your users' needs, all without you having to publish a new version of your app.
Firebase Remote Config allows you to deliver different configurations to targeted groups of users by making use of conditions, which use targeting rules to deliver specific values for different users. For example, you can send down custom Remote Config data to your users in different countries. Or, you can send down different data sets separately to iOS and Android devices.
You can also deliver different values based on audiences you've defined in Firebase Analytics for more sophisticated targeting. So if you want to change the look of your in-app store just for players who have visited your store in the past but haven't purchased anything yet, you can do that by creating Remote Config values just for that audience.
Remote Config conditions also allow you to deliver different values to random sets of users. You can take advantage of this feature to run A/B tests or to gradually rollout new features.
If you are launching a new feature in your app but aren't sure if your audience is going to love it, you can hide it behind a flag in your code. Then, you can change the value of that flag using Remote Config to turn the feature on or off. By defining a "My New Feature Experiment" condition that is active for, say, 10% of the population, you can turn on this new feature for a small subset of your users, and make sure it's a great experience before you turn it on for the rest of your population.
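As an illustration of that feature-flag pattern, here is a minimal sketch in the Firebase web SDK's namespaced style (Remote Config shipped first for Android and iOS, so treat this as illustrative; the parameter name and the function behind the flag are hypothetical):

// Default the flag to off; Remote Config can flip it on for a subset of users.
var remoteConfig = firebase.remoteConfig();
remoteConfig.defaultConfig = { my_new_feature: false };
remoteConfig.fetchAndActivate().then(function() {
  if (remoteConfig.getBoolean('my_new_feature')) {
    enableMyNewFeature(); // hypothetical code path behind the flag
  }
});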
Similarly, you can run A/B tests by supplying different values to different population groups. Want to see if people are more likely to complete a purchase if your in-app purchase button says, "Buy now" or "Checkout"? That's the kind of experiment you can easily run using A/B tests.
If you want to track the results of these A/B tests, you can do that today by setting a user property in Firebase Analytics based on your experiment. Then, you can filter any of your Firebase Analytics reports (like whether or not the user started the purchase process) by this property. Watch this space for news on upcoming improvements to A/B testing.
Many of our early partners have already been using Firebase Remote Config to test out changes within their apps.
Fabulous, an app from Duke University's Center for Advanced Hindsight designed to help people adopt better lifestyle habits, wanted to experiment with its getting-started flow to see which methods were most effective for getting users up and running in the app. They not only A/B tested changes like images, text, and button labels, but also A/B tested the entire onboarding process by using Remote Config to determine which dialogs people saw and in what order.
Thanks to their experiments with Remote Config, Fabulous was able to increase the percentage of people who completed their onboarding flow from 42% to 64%, and to improve their one-day retention rate by 27%.
Research has shown that the average app loses the majority of its users in the first three days, so making these kinds of improvements to your app's onboarding process, and confirming their effectiveness by conducting A/B tests, can be crucial to ensuring the long-term success of your app.
When you use Remote Config, you can supply all of your default values locally on the device and then send down new values from the cloud only where they differ from your defaults. This gives you the flexibility to wire up every value in your app to be potentially configurable through Remote Config, while keeping your network calls lightweight because you're only sending down changes. So feel free to take all your hard-coded strings, constants, and that AppConstants file you've got sitting around (it's okay, we all have one), and wire 'em up for Remote Config!
Firebase Remote Config is part of the Firebase platform and is available for free on both iOS and Android. If you want to find out more, please see our documentation and be sure to explore all the features of the Firebase SDK.
Posted by Richard The, Google Creative Lab
We wanted to build something cool for the new lobby in our New York City office, something that was big, could display a lot of interesting stuff, and (most importantly) was really fun to play and interact with. A giant screen would have been pretty easy, but touching a screen just isn’t that much fun. So we went with something a little different: six thousand light-up arcade buttons acting as interactive pixels, controlled through a web page.
Then we had a thought. Wouldn’t it be even more fun if all developers, coders, and makers could build their own display running apps they created for it?
AnyPixel.js is an open-source software and hardware library created here at Google, making it possible to use the web to create big, unusual, interactive displays out of all kinds of things. Anyone can take and hack the code and the schematics to create their own display at any scale or level of expertise.
Find out more about the project, or how to create your own at goo.gl/anypixel.