Migrating to Vespa Cloud

Migrating a Vespa application to Vespa Cloud is straightforward: Vespa Cloud supports the same features as your self-hosted Vespa instances, so you just gain some new capabilities and avoid the operational work.

The steps to migrate are:

  1. Specify node resources instead of a list of nodes in services.xml.
  2. Remove the hosts.xml file.
  3. Add a deployment.xml file which specifies the zones in which your application should be run.
  4. Set up a job to automatically submit application builds to Vespa.
  5. Feed initial content into your new instances.
  6. Change your feed and query clients to use Vespa Cloud endpoints.

1. Specify node resources instead of a list of nodes in services.xml

With Vespa Cloud you don’t need to list the nodes to use in services.xml; instead, you just specify the resources you need:

<nodes count="4">
    <resources vcpu="8" memory="16Gb" disk="200Gb"/>
</nodes>

Resources must match a node flavor on the cloud(s) you are deploying to; see AWS flavors and GCP flavors.

You can also specify ranges for any of these numbers to have Vespa autoscale depending on load. See the nodes reference doc for all options.
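
For example, a sketch of autoscaling between 4 and 8 nodes (illustrative values; see the nodes reference for the exact range syntax):

<nodes count="[4, 8]">
    <resources vcpu="8" memory="16Gb" disk="200Gb"/>
</nodes>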

2. Remove the hosts.xml file

Since you don’t need to specify individual nodes, this file is no longer needed.

3. Add a deployment.xml file which specifies the zones in which your application should be run

Instead of hosts.xml, you specify which zones (and clouds) you want your application deployed to by adding a deployment.xml file:

<deployment version="1.0">
    <prod>
        <region>aws-us-east-1c</region>
        <region>aws-us-west-2a</region>
    </prod>
</deployment>

Here you can also specify details about how and when changes should be rolled out, application endpoints you need, BCP and other deployment aspects; see the deployment reference documentation.
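
As an illustrative sketch (the container id "default" is assumed to match a container cluster in your services.xml; see the deployment reference for the exact options), blocking rollouts on weekends and adding a global endpoint could look like this:

<deployment version="1.0">
    <!-- Block rollouts of application changes on weekends -->
    <block-change days="sat,sun" hours="0-23" time-zone="UTC"/>
    <prod>
        <region>aws-us-east-1c</region>
        <region>aws-us-west-2a</region>
    </prod>
    <!-- A global endpoint spanning both regions -->
    <endpoints>
        <endpoint id="global" container-id="default">
            <region>aws-us-east-1c</region>
            <region>aws-us-west-2a</region>
        </endpoint>
    </endpoints>
</deployment>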

4. Set up a job to automatically submit application builds to Vespa

Vespa Cloud takes responsibility for rolling out your application changes to all of your production zones as well as testing the changes first. Instead of deploying your application package to a specific zone, you submit it to Vespa, which will then deploy it to each zone for you (after testing).

You will usually want to set up a job which automatically builds your application package when changes to it are checked in, to get continuous deployment of your application.
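
As a minimal sketch, such a job could end by using the Vespa CLI to submit the package, assuming the CLI is installed, the application package is in the working directory, and deployment credentials are set up as described in automated deployments:

# Point the CLI at Vespa Cloud and your application
vespa config set target cloud
vespa config set application <tenant>.<application>.<instance>

# Submit the package; Vespa Cloud tests it and rolls it out to all production zones
vespa prod deploy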

Follow automated deployments to complete this step.

5. Feed initial content into your new instances

After step 4 you’ll have your new cloud application instances up and running, and it’s time to initialize them with data. You can find the ‘zone’ endpoint to use under Endpoints in the cloud console. See configuring mTLS on how to use mTLS certificates.

You can write data efficiently using the document/v1 API over HTTP/2, either directly, with the vespa-feed-client, or with the Vespa CLI.
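
For example, feeding a file of documents with the Vespa CLI could look like this (the file name is a placeholder):

# Feed JSON documents to the zone endpoint over HTTP/2
vespa feed docs.jsonl \
    --application <tenant>.<application>.<instance> \
    --target [zone endpoint from the cloud console]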

If you want to feed data from a self-hosted Vespa into your new cloud instances, see the appendix.

6. Change your feed and query clients to use Vespa Cloud endpoints

Finally, you want to point your regular feeding and query requests to the new instances.

By default you get an mTLS endpoint for each zone. See configuring mTLS on how to use mTLS certificates, and the Endpoints section in the cloud console to get the zone endpoints.
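
As a sketch, querying a zone endpoint with the data-plane certificate pair generated when configuring mTLS (file names may differ in your setup) could look like this:

# Query the mTLS zone endpoint with the data-plane certificate and key
curl --cert data-plane-public-cert.pem --key data-plane-private-key.pem \
    -G "https://<zone endpoint>/search/" \
    --data-urlencode 'yql=select * from sources * where true'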

You can also add access tokens in the console as an alternative to mTLS, and specify global and private endpoints in deployment.xml.
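
With a token, the same query is authenticated with a bearer token instead (sketch; the token endpoint and token value come from the console):

# Query a token endpoint using an access token from the console
curl -H "Authorization: Bearer <token>" \
    -G "https://<token endpoint>/search/" \
    --data-urlencode 'yql=select * from sources * where true'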

See also the http best practices documentation.

Next steps

If you followed the steps above, you’ll have your Vespa application running in Vespa Cloud with fully automated upgrades, continuous deployment, strong security and 24/7 operations provided by Vespa. Congratulations!

You may want to set up monitoring to ensure your application behaves as expected on the business metrics you care about.

Appendix

Feeding data from an existing Vespa instance

To dump data from an existing Vespa instance, you can run the following with the Vespa CLI:

# Dump all documents in parallel slices, one gzipped file per slice
slices=10
for slice in $(seq 0 $((slices-1))); do
    vespa visit \
        --slices $slices --slice-id $slice \
        --target [existing Vespa instance endpoint] \
        | gzip > dump.$slice.gz &
done

This dumps all the content to files, but you can also pipe the content directly into ‘vespa feed’.
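
For example, a sketch that streams documents straight from the existing instance into the new one, without intermediate files:

# Pipe documents directly from the existing instance into the new cloud instance
vespa visit --target [existing Vespa instance endpoint] | \
    vespa feed \
        --application <tenant>.<application>.<instance> \
        --target [zone endpoint from the cloud console] -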

To feed the data:

# Feed each dumped slice into the new cloud instance
slices=10
for slice in $(seq 0 $((slices-1))); do
    zcat dump.$slice.gz | \
        vespa feed \
            --application <tenant>.<application>.<instance> \
            --target [zone endpoint from the cloud console] -
done

Note that the different slices in these commands can be done in parallel on different machines.

Accessing your public cloud application from another VPC in another account

A common challenge when deploying on the public cloud is network connectivity between workloads running in different accounts and VPCs. Within a team, this is often resolved by setting up VPC peering between VPCs, but this has its challenges when coordinating between many different teams and dynamic workloads. Vespa does not support direct VPC peering.

There are three recommended options:

  1. Use your public endpoints, preferably over IPv6: The default. There are many advantages to a Zero-Trust approach and accessing your application through the public endpoint. If you use IPv6, you will also avoid some of the network costs associated with IPv4 NATs, etc. For some applications this option could be cost prohibitive, but one should not assume this is the case for every application with a moderate amount of data being transferred over the endpoint.

  2. Use private endpoints via AWS PrivateLink or GCP Private Service Connect: Vespa allows you to set up private endpoints for exclusive access from your own, co-located VPCs. This requires much less administrative overhead than general VPC peering and is also more secure. See the private endpoints documentation.

  3. Run Vespa workloads in your own account/project (Enclave): The Vespa Enclave feature allows you to have all your Vespa workloads run in your own account. In this case, you can set up any required peering to open the connection into your application. While generally available, using Vespa Enclave requires significantly more effort from the application team in terms of operating the service, and is only recommended for larger applications that can justify the additional work from e.g. a security or interoperability perspective. See the Vespa Enclave documentation.