Hello Firebase Developers!
We recently launched a major refresh for the Remote Config SDK in our v2 release, and it includes a few changes that will help you keep your app responsive and snappy.
The tl;dr is that in the v2 SDK we’ve improved Remote Config along a few themes: non-blocking initialization and activation, smarter fetch policies with configurable timeouts and fetch intervals, more meaningful error responses, and open-source SDKs.
Read on for more details on each of these new updates!
One of the biggest requests we’ve had is to add non-blocking initialization and fetch calls for Remote Config params, and for good reason.
Let’s say you’re a developer for a travel app, and you want to show a travel deal based on whether a customer is predicted to spend on hotels at some destination (possibly using Firebase Predictions). With Remote Config, you can surface such travel deals to just those users. Now, let’s say that these deals are surfaced alongside the main parts of your app, where your users find and book travel. What you don’t want is for those main parts of your app to be blocked on Remote Config initialization just for the sake of the travel deals. With the v1 SDK, unless you were very careful about where you placed your Remote Config initialization call, your users would have to wait for Remote Config to initialize before they could use the main parts of the app. In the v2 SDK, this is no longer the case.
By introducing non-blocking initialization, it is now possible to load Remote Config without affecting the load time of the rest of the app. This can improve startup times across the board, but is especially important for users who are in regions of limited or slow network connectivity.
We’ve also added a new convenience method in the v2 SDKs that both fetches and activates your Remote Config values in a single call, which streamlines the activation code flow. Here’s how fetching and activating Remote Config parameters looks before and after the v2 SDK updates:
iOS:
// BLOCKING
remoteConfig = RemoteConfig.remoteConfig()
// Fetch with a completion handler.
remoteConfig.fetch { status, error in
  if status == .success {
    remoteConfig.activateFetched()
    let value = remoteConfig["myKey"]
  }
}
// Non-blocking initialization.
remoteConfig = RemoteConfig.remoteConfig()
remoteConfig.fetchAndActivate { status, error in
  if status == .successFetchedFromRemote || status == .successUsingPreFetchedData {
    let value = remoteConfig["myKey"]
  }
}
Android:
// BLOCKING
FirebaseRemoteConfig frc = FirebaseRemoteConfig.getInstance();
frc.fetch()
    .addOnSuccessListener(new …<>() {
      frc.activateFetched();
      readFrcValues();
    });

void readFrcValues() {
  long value = frc.getLong("myKey");
}
// Non-blocking initialization.
// Loads values from disk asynchronously.
FirebaseRemoteConfig frc = FirebaseRemoteConfig.getInstance();
frc.fetchAndActivate().addOnSuccessListener((unusedVoid) -> readFrcValues());

void readFrcValues() {
  long value = frc.getLong("myKey");
}
Another important update for smarter fetching is the introduction of the new FetchTimeoutInSeconds and MinimumFetchIntervalInSeconds parameters, which can be used together to provide smarter fetching policies for your app.
The FetchTimeoutInSeconds parameter is useful when you don’t want your application to wait more than a set number of seconds to fetch new Remote Config values. It sounds like a small change, but it can actually make a considerable difference in how responsive your app feels to your users.
For example, let’s go back to the travel app. The travel deals portion of the app is conditioned on Remote Config values, but it’s a non-critical part of the app: it doesn’t get in the way of the main booking flow, so it’s not as important that the travel deals load and show up immediately after the user opens the app.
Now let’s say we’ve designed a new booking flow for our app, and we want to roll it out to a test group to get some feedback. In this scenario, users will need to wait for a network roundtrip to the Remote Config backend to determine whether they’re in the test group and get the new booking interface, or the current one instead.
Waiting for one roundtrip for the interface to load isn’t generally a huge deal, but what if our user is in an area of limited or spotty connectivity? In such cases, the reliability or availability of network infrastructure can dictate the user experience we can provide.
This situation is where being able to set FetchTimeoutInSeconds becomes really useful. Let’s say we don’t want the test group to experience any Remote Config fetch delays longer than three seconds; using this parameter, we can specify exactly that. So if our user does happen to be in an area with spotty network connectivity, we can still ensure they have a great experience by falling back to either the previously fetched values or the in-app default values.
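For that fallback to have something sensible to show, it helps to pair the fetch timeout with in-app default values. Here’s a minimal Swift sketch of that combination; the parameter keys are hypothetical, just for illustration:

import FirebaseRemoteConfig

let remoteConfig = RemoteConfig.remoteConfig()

// Don't let a fetch hold up this code path for more than three seconds.
let settings = RemoteConfigSettings()
settings.fetchTimeout = 3
remoteConfig.configSettings = settings

// In-app defaults are used whenever fetched values aren't available in time.
remoteConfig.setDefaults([
  "show_travel_deals": false as NSNumber,
  "booking_flow_variant": "current" as NSString
])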
The new MinimumFetchIntervalInSeconds parameter, as the name implies, lets you set the minimum interval between checks for new Remote Config parameter values.
The new parameter name describes how Remote Config retrieves new parameter values from the backend more accurately than the previous caching terminology did, but it does not change how Remote Config handles caching. If the default minimum fetch interval of 12 hours is too long for your app’s needs, you can adjust it to a value that’s more appropriate, such as once per hour.
Keep in mind that fetching much more frequently than that might cause your app to run into rate limits. If you do hit a rate limit, though, the new error response codes can help you deal with it (read the section below for more details).
Setting up these new parameters is straightforward. On Android, you can set both the minimum fetch interval and the fetch timeout parameters directly with the Settings Builder API:
Android
FirebaseRemoteConfigSettings configSettings = new FirebaseRemoteConfigSettings.Builder()
    .setMinimumFetchIntervalInSeconds(fetchRefreshRate)
    .setFetchTimeoutInSeconds(fetchTimeout)
    .build();
mFirebaseRemoteConfig.setConfigSettings(configSettings);
On iOS, you can set similar values on the configSettings object.
iOS
let remoteConfigSettings = RemoteConfigSettings()
remoteConfigSettings.minimumFetchInterval = 1200
remoteConfigSettings.fetchTimeout = 3
remoteConfig.configSettings = remoteConfigSettings
To enable development mode with the new MinimumFetchIntervalInSeconds parameter, set its value to 0. Just be sure to change it back before you ship to production!
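One way to avoid shipping that setting by accident is to gate it on your build configuration. Here’s a small Swift sketch of that idea, assuming a standard DEBUG compilation flag:

import FirebaseRemoteConfig

let remoteConfig = RemoteConfig.remoteConfig()
let settings = RemoteConfigSettings()

#if DEBUG
// Development: always allow a fresh fetch so config changes show up immediately.
settings.minimumFetchInterval = 0
#else
// Production: keep a conservative interval to stay clear of rate limits.
settings.minimumFetchInterval = 43200 // 12 hours, the default
#endif

remoteConfig.configSettings = settings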
For the Android SDK, we’ve updated how the client SDK communicates with the Remote Config backend. The previous implementation sometimes swallowed error responses along the way, which could make it difficult to understand why fetch requests were failing. The SDK now talks to the Remote Config backend through direct REST API calls, so you’ll get more informative and meaningful error responses when something goes wrong.
Upgrading is easy! Just update your Remote Config dependencies to the latest version, rebuild your app, and you’ll be set up to use the latest Remote Config v2 SDKs.
We’re always trying to make Remote Config, along with each of our other Firebase products, more helpful and easier to use. These changes are an effort toward that, so let us know what you think about them! Also, if you discover any bugs with the new updates, we’d love to hear about those too. Please reach out to us on StackOverflow or the official Firebase support site.
That’s right, the Android SDK and the iOS SDK for Remote Config are now both open source! You can check out the GitHub repo for the Android SDK in all its open source goodness here, and the repo for the iOS SDK here.
This article originally appeared in the Firebase Developer Community blog.
We like saying lots of impressive things about Cloud Firestore's performance -- "performance scales with the size of the result set, not the underlying data set", and that "it's virtually impossible to create a slow query." And, for the most part, this is true. You can query a data set with billions upon billions of records in it, and get back results faster than your user can move their thumb away from the screen.
But with that said, we occasionally hear from developers that Cloud Firestore feels slow in certain situations, and it takes longer than expected to get results back from a query. So why is that? Let's take a look at some of the most common reasons that Cloud Firestore might seem slow, and what you can do to fix them.
Probably the most common explanation for a seemingly slow query is that your query is, in fact, running very fast. But after the query is complete, we still need to transfer all of that data to your device, and that's the part that's running slowly.
So, yes, you can go ahead and run a query of all sales people in your organization, and that query will run very fast. But if that result set consists of 2000 employee documents and each document includes 75 KB of data, you have to wait for your device to download 150 MB of data before you can see any results.
The best way to fix this issue is to make sure you're not transferring down more data than you need. One simple option is to add limits to your queries. If you suspect that your user only needs the first handful of results from your employee list, add a limit(25) to the end of your query to download just the first batch of data, and then only download further records if your user requests them. And, hey, it just so happens I have an entire video all about this!
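As a rough Swift sketch of that pattern (with a hypothetical employees collection), fetch the first page with a limit, then use the last document you received as the cursor for the next page:

import FirebaseFirestore

let db = Firestore.firestore()

// First page: just the 25 documents the user is likely to look at first.
db.collection("employees")
  .order(by: "lastName")
  .limit(to: 25)
  .getDocuments { snapshot, error in
    guard let snapshot = snapshot, let lastDoc = snapshot.documents.last else { return }

    // Later, if the user scrolls, pick up where the first page left off.
    db.collection("employees")
      .order(by: "lastName")
      .start(afterDocument: lastDoc)
      .limit(to: 25)
      .getDocuments { nextSnapshot, error in
        // Append these results to the employee list.
      }
  }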
If you really think it's necessary to query and retrieve all 2000 sales employees at once, another option is to break those records up into documents that contain only the data you need in the initial query, and move any extra details into a separate collection or subcollection. Those detail documents won't get transferred on the first fetch, but you can request them later as your user needs them.
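Here's one way that split could look as a Swift sketch; the collection, document, and field names are all hypothetical:

import FirebaseFirestore

let db = Firestore.firestore()

// The list screen reads small summary documents only,
// e.g. employees/{employeeId} -> { name, region, photoURL }.
let employeeRef = db.collection("employees").document("alice")

// The heavyweight details live in a subcollection and are only fetched
// when the user actually opens that employee's profile,
// e.g. employees/{employeeId}/details/profile.
employeeRef.collection("details").document("profile")
  .getDocument { snapshot, error in
    // Render the full profile here.
  }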
Having smaller documents is also nice in that, if you have a realtime listener set up on a query and a document is updated, the entire changed document gets sent over to your device. So by keeping your documents small, you'll also transfer less data every time a change happens in one of your listeners.
So Cloud Firestore's offline cache is pretty great. With persistence enabled, your application "just works", even if your user goes into a tunnel or takes a 9-hour plane flight. Documents read while online will be available offline, and writes are queued up locally until the app is back online. Additionally, your client SDK can make use of this offline cache to avoid downloading too much data, and it can make actions like document writes feel faster. However, Cloud Firestore was not designed as an "offline first" database, and as such, it's currently not optimized for handling large amounts of data locally.
So while Cloud Firestore in the cloud indexes every field in every document in every collection, it doesn’t (currently) build any of those indexes for your offline cache. This means that when you query documents in your offline cache, Cloud Firestore needs to unpack every document stored locally for the collection being queried and compare it against your query.
Or to put it another way, queries on the backend scale with the size of your result set, but locally, they kinda scale with the size of the data in the collection you're querying.
Now, how slow local querying ends up being in practice depends on your situation. I mean, we're still talking about local, non-network operations here, so this can be (and often is) faster than making a network call. But if you have a lot of data in one single collection to sort through, or you're just running on a slow device, local operations on a large offline cache can be noticeably slower.
First, follow the best practices mentioned in the previous section: add limits to your queries so you're only retrieving the data that you think your users will need, and consider moving unneeded details into subcollections. Also, if you followed the "several subcollections vs a separate top level collection" discussion at the end of my earlier post, this would be a good argument for the "several subcollections" structure, because the cache only needs to search through the data in these smaller collections.
Second, don't stuff more data in the cache than you need. I've seen some cases where developers will do this intentionally by querying a massive number of documents when their application first starts up, then forcing all future database requests to go through the local cache, usually in a scheme to reduce database costs, or make future calls faster. But in practice, this tends to do more harm than good.
Third, consider reducing the size of your offline cache. The cache size defaults to 100 MB on mobile devices, but in some situations this might be too much data for your device to handle, particularly if you end up having most of your data in one massive collection. You can change this size by modifying the cacheSizeBytes value in your Firestore settings, and that's something you might want to do for certain clients.
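For example, here's a minimal Swift sketch that dials the cache down to 20 MB; the exact number is just for illustration:

import FirebaseFirestore

// Configure settings before making any other Firestore calls.
let settings = FirestoreSettings()
// Shrink the offline cache so local queries have less data to scan through.
settings.cacheSizeBytes = 20 * 1024 * 1024 // 20 MB
Firestore.firestore().settings = settings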
Fourth, try disabling persistence entirely and see what happens. I generally don't recommend this approach -- as I mentioned earlier, the offline cache is pretty great. But if a query seems slow and you don't know why, re-running your app with persistence turned off can give you a good idea of whether your cache is contributing to the problem.
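If you want to run that experiment, it's a quick change in the Swift SDK (a diagnostic sketch, not a recommendation):

import FirebaseFirestore

// Temporarily turn off the offline cache to see whether it is the bottleneck.
let settings = FirestoreSettings()
settings.isPersistenceEnabled = false
Firestore.firestore().settings = settings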
So zig-zag merge joins, in addition to being my favorite algorithm name ever, are very convenient in that they allow you to coalesce results from different indexes together without having to rely on a composite index. They essentially do this by jumping back and forth between two (or more) indexes sorted by document ID and finding matches between them.
But one quirk about zig-zag merge joins is that you can run into performance issues where both sets of results are quite large, but the overlap between them is small. For example, imagine a query where you were looking for expensive restaurants that also offered counter service.
restaurants.where('price', '==', '$$$$').where('orderAtCounter', '==', 'true')
While both of these groups might be fairly large, there's probably very little overlap between them. Our merge join would have to do a lot of searching to give you the results you want.
So if you notice that most of your queries seem fast, but specific queries are slow when you're performing them against multiple fields at once, you might be running into this situation.
If you find that a query across multiple fields seems slow, you can make it performant by manually creating a composite index against the fields in those queries. The backend will then use this composite index in all future queries instead of relying on a zig-zag merge join, meaning that once again the query will scale with the size of the result set.
While Cloud Firestore has more advanced querying capabilities, better reliability, and scales better than the Firebase Realtime Database, the Realtime Database generally has lower latency if you're in North America. It's usually not by much, and in something like a chat app, I doubt you would notice the difference. But if you have an app that's reliant upon very fast database responses (something like a real-time drawing app, or maybe a multiplayer game), you might notice that the Realtime Database feels… uhh… realtime-ier.
If your project is such that you need the lower latency that the Realtime Database provides (and you're anticipating that most of your customers are in North America), and you don't need some of the features that Cloud Firestore provides, feel free to use the Realtime Database for those parts of your project! Before you do, I would recommend reviewing this earlier blog post, or the official documentation, to make sure you understand the full set of tradeoffs between the two.
Remember that even in the most perfect situation, if your Cloud Firestore instance is hosted in Oklahoma, and your customer is in New Delhi, you're going to have at least 80 milliseconds of latency because of that whole "speed of light" thing. And, realistically, you're probably looking at something more along the lines of a 242 millisecond round trip time for any network call. So, no matter how fast Cloud Firestore is to respond, you still need time for that response to travel between Cloud Firestore and your device.
First, I'd recommend using realtime listeners instead of one-time fetches. This is because using realtime listeners within the client SDKs gives you a lot of really nice latency compensation features. For instance, Cloud Firestore will present your listener with cached data while it's waiting for the network call to return, giving you the ability to show results to your user faster. And database writes are applied to your local cache immediately, which means that you will see these changes reflected nearly instantly while your device is waiting for the server to confirm them.
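Here's a small Swift sketch of that pattern: a snapshot listener that can render cached results right away and then update once the server responds. The collection name is hypothetical:

import FirebaseFirestore

let db = Firestore.firestore()

// A realtime listener fires right away with cached results (when available),
// then fires again once fresh results arrive from the backend.
db.collection("employees").limit(to: 25)
  .addSnapshotListener { snapshot, error in
    guard let snapshot = snapshot else { return }

    // metadata.isFromCache tells you whether this delivery came from the
    // local cache or from the server.
    let source = snapshot.metadata.isFromCache ? "cache" : "server"
    print("Got \(snapshot.documents.count) employees from the \(source)")
  }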
Second, try to host your data where the majority of your customers are going to be. You have the option of selecting your Cloud Firestore location when you first initialize your database instance, so take a moment to consider what location makes the most sense for your app, not just from a cost perspective, but a performance perspective as well.
Third, consider implementing a reliable and cheap global communication network based on quantum entanglement, allowing you to circumvent the speed of light. Once you've done that, you probably can retire off of the licensing fees and forget about whatever app you were building in the first place.
So the next time you run into a Cloud Firestore query that seems slow, take a look through this list and see if you might be hitting one of these scenarios. While you're at it, don't forget that the best way to understand how well your app is performing is to measure it out in the wild, in real-life conditions, and Firebase Performance Monitoring is a great way of doing that. Consider adding Performance Monitoring to your app and setting up a custom trace or two so you can see how your queries actually perform for your users.
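For example, a custom trace wrapped around a Firestore fetch might look something like this Swift sketch (the trace and collection names are hypothetical):

import FirebaseFirestore
import FirebasePerformance

let db = Firestore.firestore()

// Measure how long the employee list query takes for real users.
let trace = Performance.startTrace(name: "load_employee_list")

db.collection("employees").limit(to: 25)
  .getDocuments { snapshot, error in
    trace?.stop()
    // Render the results here.
  }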