"Cloud prem" (cloud + on-premise) is a deployment pattern that's becoming more and more common with companies. Vendors deploy software to their client's cloud provider under an isolated account or a separate VPC (see my SaaS Isolation Patterns). I first heard the term from Tomasz Tunguz's blog.

In practice, this means packaging an application as Terraform or Kubernetes configuration. This is how you might deploy something like Databricks in your cloud. Startups like Replicated offer this as a service, packaging your application up with Kubernetes.
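To make the packaging idea concrete, here is a minimal sketch of rendering per-customer deployment inputs, using a Python dict to stand in for a Terraform variables file. Every name here (the variable keys, the customer, the account ID) is hypothetical; a real cloud-prem package would ship its own module inputs.

```python
import json

def render_tfvars(customer: str, account_id: str, region: str) -> str:
    """Render per-customer deployment variables (a stand-in for a .tfvars.json file).

    All keys below are illustrative, not a real vendor's schema.
    """
    tfvars = {
        "customer": customer,
        "aws_account_id": account_id,  # deploy into the *customer's* account
        "region": region,
        "app_version": "1.4.2",        # each customer can pin their own version
    }
    return json.dumps(tfvars, indent=2)

# One rendered file per customer account — the vendor never hosts anything.
print(render_tfvars("acme", "123456789012", "us-east-1"))
```

The key property of the model shows up even in this toy: the output is configuration handed to the customer, not a service the vendor runs.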

Since vendors don't need to pay for cloud resources, they should theoretically see higher gross margins (avoiding the "cloud tax"). Data security is no longer an issue because sensitive data never leaves the client's account.

But there are downsides, many of which are the reasons we moved to SaaS in the first place.

In the cloud prem model, customers can often stay on previous versions, leading to version skew. In fact, this is often touted as a feature of cloud prem, taking some of the pressure off of internal IT teams to do updates and migrations. Multi-tenant SaaS, by contrast, puts the servicing burden on the vendor, exposing functionality only through APIs.

Supporting old versions can severely reduce product velocity at a company. Security patches need to be backported, and data migrations need to be performed for each customer.
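The backport burden grows with the number of distinct release lines in the field, not the number of customers. A small sketch, with an entirely made-up fleet of customer versions:

```python
# Hypothetical fleet: customer -> deployed version (illustrative data only).
fleet = {
    "acme": "1.2.9",
    "globex": "1.4.1",
    "initech": "1.3.5",
    "umbrella": "1.4.1",
}

def release_lines(fleet: dict) -> list:
    """Distinct major.minor release lines still deployed somewhere.

    Each line is a branch the vendor must keep patching: security fixes
    get backported to it, and data migrations must work from it.
    """
    return sorted({".".join(v.split(".")[:2]) for v in fleet.values()})

print(release_lines(fleet))  # → ['1.2', '1.3', '1.4']
```

Four customers, but three branches to patch; a multi-tenant SaaS vendor would maintain exactly one.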

Cloud prem deployments inherently don't share resources. If services are completely isolated in a separate cloud account, there can be significant redundancy across deployments (e.g. running a separate database for each customer's application). This makes it more expensive for customers to run it themselves (in time, since they aren't experts, and in $ because of duplicated resources).

For a more concrete example, take Snowflake and Databricks. Snowflake has a completely cloud-based offering, versus Databricks's cloud prem model. When Snowflake makes an improvement to their data compression or query engine, it can immediately be rolled out to all customers with a behind-the-scenes migration. Databricks can't roll out a change like that as easily, since customers are on different versions.

Customers can opt to fully integrate the application into their account, de-duplicating redundant infrastructure. Except now, the integration problem is even trickier.

Customers will begin to rely on parts of your internal implementation that you didn't plan to expose. To quote Hyrum's Law (read: Keep Your API Surface Small):

With a sufficient number of users of an API,
it does not matter what you promise in the contract:
all observable behaviors of your system
will be depended on by somebody.

Yet, customers continue to push for this model because of compliance concerns. It's much easier to get a new service through security review when there is no chance that sensitive data will leave the customer's cloud account.

As go-to-market continues to be extremely important, vendors will continue to offer the largest API surfaces they can to garner adoption. I'm not sure what it will look like when vendors have to maintain these deployments in the long run.