2018 Fin Annual Letter
Fin Analytics

In the last few months, there have been major moves at companies focused on the future of work. Robotic Process Automation, a technology that makes it possible to automate repetitive tasks inside businesses, is on fire. UiPath, arguably one of the standard bearers for the RPA industry, just raised a huge round that values it at more than $3 billion after growing from $1 million to $100 million in revenue in less than two years.

Human work aggregators also are doing extremely well. Freelance platform Upwork just had a successful IPO, and TaskUs, a next-generation business process outsourcing firm, just raised $250 million from Blackstone. Many more labor aggregators are benefiting from technology companies’ new demand for content moderation and tagging.

The intellectual narrative also is shifting quickly from discussions of pure-AI futures (which I have always considered fantasy) to practical discussions of a human + machine future. Paul Daugherty, the CTO of Accenture, published a great book called “Human + Machine,” and Kai-Fu Lee’s book on the Chinese viewpoint, “AI Superpowers,” which is heavily driven by the hybrid-future narrative, is a best seller.

Almost exactly a year ago I wrote a column about the future of work. Specifically, I focused on the role of machine learning and AI in measuring historically unmeasured human knowledge work in a way that could help optimize it.

Building on that theme, over the last year it has become increasingly clear to me that the real way to talk about the future of human knowledge work is as a cloud resource that looks and functions a lot like how Amazon Web Services operates today.

As I see it, today the path forward for knowledge work is using AI and machine learning to effectively build a knowledge-work cloud, with a series of key technical systems that very much resemble what we today use on computing clouds like AWS.

Contextualizing Our Next ‘Industrial’ Revolution

A few centuries ago, new technology like steam power, railroads and pendulum clocks allowed for the Industrial Revolution. These tools enabled people to reorganize how physical work was completed.

Physical production was taken out of distributed, inefficient, and unmeasured piecemeal modules and brought into systems and factories that dramatically increased efficiency, speed and quality. This led to an explosion of prosperity.

The tools alone were just potential. On their own, their impact would have been minimal. It was the tools coupled with the reorganization of human work that led to impact.

In the last few decades, we have unlocked a series of technologies that are every bit as fundamental as those that brought about the Industrial Revolution, and should lead to an explosion of prosperity. Yet human productivity has increased far less than one would expect given the power of our new tools.

The reason that we haven’t yet seen spectacular growth in human knowledge-work productivity is that in order to get the full advantage of our new tools, we need to reorganize the way in which we execute knowledge work. And, thus far, the day-to-day patterns of knowledge work have changed shockingly little.

I believe that we are going to need to build the equivalent of factories for knowledge work if we want to reap the benefits of things like machine learning and AI fully. This is the next great business opportunity that several companies are beginning to recognize and chip away at in various forms.

We aren’t going to see the end of human knowledge work in the foreseeable future. New machines aren’t going to put us out of jobs in the 21st century, just as they didn’t in the successive waves of the industrial revolution a few hundred years ago.

What new technology is doing is allowing us to reorganize how work is done so that human attention can be focused on the most “human” work, and machines can do the most “machine” work.

You can think of this effectively as a modern “Knowledge-Work Cloud” that dramatically increases the speed, efficiency and quality of knowledge work, while providing people with better, more flexible jobs focused on completing the most “human” human work.

Our goal at Fin is to build this modern ‘Knowledge-Work Cloud’. Just like cloud-computing platforms, there will likely be a few winners in the space – but not an infinite number. We believe that our engine will dramatically increase the speed, efficiency, and quality of knowledge work while providing people with better, more flexible jobs focused on completing the most ‘human’ human work.

Given where we are in the technology cycle, it is pretty clear that the next set of great businesses will be services. They will leverage technology heavily but also have a lot of operational complexity to them. If we get this right we can be the backbone for them the same way AWS has been for a generation of mobile and web companies.

The ‘Knowledge-Work Cloud’ Analogy

In conceptualizing what the future of knowledge work looks like, there are two analogies that are worth exploring. Neither is perfect, but both are highly informative. The first is the evolution of the cloud. The second is the functioning of “the factory.” The cloud analogy expresses something about the benefits of a near-future knowledge-work engine for customers and how they will want to interact with services like these. The “factory” analogy is informative in thinking through how these services should actually function internally as human-computer hybrid clouds.

The Cloud

A generation ago, all businesses owned their own computer hardware. They would buy it from a vendor, wait for it to be delivered and installed, and then spend time and money keeping it running on premises.

The hardware was a capital expense that would depreciate over time. If they bought more hardware than needed, it would sit idle. If they bought too little hardware, they wouldn’t be able to serve their customers or execute the work they needed to do. If new hardware became available, they would have to independently talk to vendors and decide whether the new technology was worth the cost. It was a capital-intensive, slow and inefficient process, and many internet companies were killed by errors in forecasting demand in either direction.

Today, of course, almost no one operates their own hardware. Everyone rents infrastructure on demand in the cloud. Per-compute cycle, the cloud can be more expensive than owning your own hardware, but when businesses consider the overall advantages of the cloud, the benefits far outweigh the costs.

The cloud turns fixed cost into variable cost. Dynamic provisioning takes away the challenges of balancing supply and demand for any individual company. No one needs to hire systems administrators or fix hardware that breaks; the computers are maintained by someone else who specializes in maintenance at scale. When new technologies become available, the cloud vendor can figure out how to integrate them to make the cloud more powerful overall, versus each company doing its own analysis and integration work. There even are network effects by allowing servers to be physically near each other. The benefits go on.

The earliest adopters of the cloud were individuals and small businesses for whom the advantages were clear. Large companies took longer than small ones to convert to cloud infrastructure because they had complex requirements, privacy concerns and were already working well enough running their own hardware. But in time, nearly all companies migrated to clouds, because the advantages were clear. Those that didn’t, lost.

Many if not all of these realities about why and how the move to the cloud happened apply to how knowledge work will evolve, and to how customers will interact with knowledge-work clouds like Fin in the future.

Hiring, training and managing people is analogous to the challenges of hardware, with similar lead-times, provisioning and maintenance costs. Businesses miss out on opportunities and even fail because they over- or under-provision human attention just as they did hardware.

The cloud solution is analogous for human knowledge work. We can drive the same benefits for companies doing important work by allowing them on-demand access in a scalable way to a pool of human resources. We as a company can be better at hiring, training and managing people than small companies can. We as a company can do a far better job building in technological efficiencies at scale and managing supply and demand.

It also is likely that we will face the same challenges in adoption that the cloud did. The earliest users will be individuals and small businesses; large organizations, while intellectually intrigued, are going to take longer to come around. But the opportunity is as massive as cloud computing infrastructure, if not, in fact, bigger.

The Factory

If the cloud analogy tells you something important about why knowledge work is going to move to a cloud model, the factory analogy is informative for thinking about how to build and optimize a modern knowledge-work cloud.

Any well-run factory is a hybrid system of machines and people that is deeply measured and constantly optimized for speed, quality and efficiency.

On the production floor, machines do what machines are best at—moving a production line and executing a certain repeated process over and over. People do what people are best at, making judgment calls and doing detailed work that machines are incapable of doing well and managing quality control. Even the most advanced factories in the world use human attention, intervention and judgment to achieve more efficiency and higher quality than would be possible using machines alone.

The factory itself has several other key functions that keep the overall system as efficient as possible. For a factory to run well, the operators need to balance the supply and demand of work for the factory (too much demand and orders are missed; too little, and the factory runs idle). The operators need to source raw material and talent to operate the factory. The operators need to constantly be measuring and optimizing the production lines. The operators need to do quality control of end products, rework issues and make sure they are hitting customer specifications on time and on budget.

The Cloud Factory for Knowledge Work

It is clear that in 2018 we are still operating in what might be viewed as a pre-industrial age for knowledge work, and just starting to peek into the future. The knowledge tasks completed by office workers all over the world are unmeasured, unoptimized and massively under-leveraged.

It is an enormous missed opportunity that professionals spend upward of half their time on administrative tasks. It is an enormous missed opportunity that people in administrative roles spend upward of half their time on call but idle, waiting for work, without the tools or measurement to optimize their process.

This should and will get fixed, and machine learning and large-scale data structures give us the tools to dramatically empower work. The way we are going to do it is through a cloud knowledge-work engine that can practically deliver the productivity gains that new technologies should have unlocked, but that we aren’t yet seeing.

We think we can fix this.

Further, if we are successful at Fin we should unlock all sorts of new small businesses, just as the cloud has. We should make all businesses and professionals far more productive, as the cloud has.

Key Systems: Routing, Workflows and Measurement

If you believe that we will move toward a cloud for knowledge work, one obvious question is what the key subsystems will be of such a system and, specifically, where is there going to be a lot of leverage from machine learning and AI?

Work Routing

The first key component is work routing—moving the right tasks to the right people at any given time. This is very similar to the queuing problem that engineers are familiar with, and for which Amazon Web Services provides managed services such as SQS.

The difference is in complexity. Any given knowledge work task might have dozens or hundreds of factors that come into play. How urgent is the work, how long do you expect it to take, who is the best person to do it and when will the person be available? If you give it to someone else, are you trading speed for efficiency or for quality?

Without serious technology, it is hard for people to properly prioritize their own work. It is even harder to take a small team of people and properly prioritize what each person is working on, and it is basically impossible to efficiently assign tasks to more than a few dozen people at scale. Modern technology and machine learning are a huge lever over this problem, and it turns out that assigning the right work to the right people at the right time is a huge lever over productivity overall.

We have seen this first hand at Fin. In the last year we have gotten extremely large and measurable returns from refining how we ‘route’ work to different people in different situations. We take into account basic things, like when the task is due and who is free to work on it. We also take into account more sophisticated questions – who has worked on a given task before, who has worked for the user before, which available agent is ‘best’ at the type of work being requested, etc.
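Conceptually, the kind of routing logic described above can be sketched in a few lines. This is a toy illustration, not Fin’s actual system; the `Agent` and `Task` attributes and the scoring weight are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    available: bool
    skill: dict          # task type -> proficiency in [0, 1] (made-up scale)
    familiar_users: set  # users this agent has worked for before

@dataclass
class Task:
    task_type: str
    user: str
    urgency: float       # higher means more urgent

def next_task(queue):
    """Pick the most urgent task waiting in the queue."""
    return max(queue, key=lambda t: t.urgency)

def routing_score(agent, task):
    """Combine availability, skill, and user familiarity into one score."""
    if not agent.available:
        return float("-inf")
    score = agent.skill.get(task.task_type, 0.0)
    if task.user in agent.familiar_users:
        score += 0.5  # bonus for prior work with this user (made-up weight)
    return score

def route(task, agents):
    """Assign the task to the highest-scoring agent."""
    return max(agents, key=lambda a: routing_score(a, task))
```

A real router would weigh dozens or hundreds of such factors, learn the weights from observed outcomes, and trade speed against quality and efficiency explicitly.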

Shared Workflows & Context

The second key component where machines can have massive leverage over productivity in a knowledge-work cloud is managing workflows and context. When someone is assigned a knowledge task, the organization overall likely already has some knowledge about how to best complete the task. It doesn’t matter if you are writing a presentation, doing research or booking a flight; there are known best practices and knowledge about how to do the task best.

In our current world, most workflows are transmitted through word of mouth or casual context. Each new employee at a job learns the general shape of the workflows they are responsible for from colleagues over time, and then perhaps tweaks them based on their own beliefs or preferences.

This is no way to run a modern system, and it is an area where machines can help people be dramatically more efficient and deliver better work. In modern knowledge-work cloud services, machines will learn the processes that people are doing, allow managers to tweak and improve the way they want work done, and then make sure that when work is being done by a person, it is being done with best-practice steps, knowledge and validation.

This is another thing we have made big strides on in the last year with Fin. We evolved from a checklist-based system to a flexible workflow-building engine that allows us to generate both universal and personalized workflows for different types of tasks. We have built in validations to make sure that the inputs and outputs at certain steps are logical and properly formatted, along with strong templating for responses generated from the work.
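As a sketch of what a step-by-step workflow with built-in validation might look like (the step names and validator here are invented for illustration, not Fin’s actual workflow format):

```python
def require_nonempty(value):
    """Validator: reject empty output before the workflow moves on."""
    if not value:
        raise ValueError("step produced empty output")
    return value

# Each step pairs a name with a validator that checks its output.
BOOKING_WORKFLOW = [
    ("gather_requirements", require_nonempty),
    ("search_options", require_nonempty),
    ("confirm_with_user", require_nonempty),
]

def run_workflow(workflow, do_step):
    """Run each step via do_step(name), validating output before advancing."""
    results = {}
    for name, validate in workflow:
        results[name] = validate(do_step(name))
    return results
```

The point is that best practices live in the workflow definition rather than in any one person’s head: managers can edit the steps, and validation catches malformed work before it reaches the customer.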


Measurement

You can’t optimize what you don’t measure, and knowledge work historically has been almost entirely unmeasured. I talked extensively about measurement in my column a year ago on the future of work, so I won’t revisit it here in detail.

Suffice it to say, however, that the No. 1 thing that is needed in order to drive the future of a knowledge-work cloud is technology that allows us to measure the process and performance of knowledge workers.

There were a lot of key figures who helped push the Industrial Revolution, but Frederick Taylor, who was the first to take time measurement seriously as a means of optimizing industrial systems, was also the most important early pioneer of the knowledge-work revolution.

This is the area we have been investing in the longest – and seen the most dramatic returns from. Early on, we figured out that traditional operations metrics weren’t going to cut it for us – we needed very personalized understanding per operations agent on what was working and what wasn’t. Look for some really exciting announcements from us soon on this front.
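The kind of per-agent measurement described above can be illustrated with a toy aggregation over task records (the record format and metric names here are hypothetical, not Fin’s actual instrumentation):

```python
from collections import defaultdict

def per_agent_metrics(records):
    """Aggregate (agent, minutes_spent, passed_quality_check) records
    into average handle time and quality pass rate per agent."""
    totals = defaultdict(lambda: [0.0, 0, 0])  # [minutes, tasks, passes]
    for agent, minutes, passed in records:
        t = totals[agent]
        t[0] += minutes
        t[1] += 1
        t[2] += int(passed)
    return {
        agent: {"avg_minutes": m / n, "quality_rate": p / n}
        for agent, (m, n, p) in totals.items()
    }
```

Traditional operations dashboards stop at team-level averages; the personalized view means computing numbers like these per agent, per task type, and per workflow step to see what is working for each individual.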

Creating Better Jobs

People look back on the history of the first Industrial Revolution and fear that new work systems will make human work worse.

The Industrial Revolution, at least in the short term, was obviously not good for workers. Factories took people out of their homes and away from families, with sometimes brutal and unhealthy working conditions.

How do we not repeat these mistakes?

It is an undeniable reality that a knowledge-cloud system will remove specialization and specialized knowledge from individual workers. This is a scary prospect in the extreme. In a system like the one we are discussing here, all knowledge becomes collaboratively shared among the team, which means that workers can’t build personal moats based on what they know or have figured out over time.

The sort of system we are discussing here also exposes people in knowledge-work fields to the brunt of globalization. At least in the U.S., this is clearly a challenge going forward for knowledge workers, as it has been over the last century for traditional factory workers. The world might be getting better off on average, but those at the top of the pyramid in the best economies clearly have more to lose than to gain as producers.

The answer has to be that as we move toward the future of work, we take as much advantage as possible of the very beneficial aspects of the powerful combination of people and machines, while still acknowledging the challenges.

One very positive aspect of this type of system is that most of the drudgery of simple tasks goes away. People generally don’t like doing things that machines can do; it is demeaning to be asked to do tasks a machine could handle. In a knowledge-cloud factory, if you can automate a task and hand it to a machine, you will do precisely that, and that type of work will largely disappear.

Working in a modern knowledge-work cloud factory should help people focus only on what they are most capable of doing. People aren’t evenly good at all things, but most jobs require people to do things they are very good at and like, as well as other things they are less capable of and enjoy less. The beauty of the future system we are discussing is that it makes it reasonably easy to balance work across many people and focus each person on the types of human work they are best at. This is good for productivity, and generally should be good for individual satisfaction with work.

This model for knowledge work should also provide a massive amount of flexibility for people on how and when they work, as well as how much they want to work. On-demand jobs show what is possible, but the current reality is that knowledge workers are still largely stuck in offices and working standardized weeks to facilitate collaboration and manage the realities of specialized knowledge held by individuals. Knowledge work should be able to move toward the best aspects of on-demand jobs, where far more people can work as much as they want, when they want, and where they want.

Working in a highly measured and optimized system with great feedback also is highly meritocratic. It is easy to identify and cultivate the hardest working and most talented people. This is generally a good thing, in my mind, though I acknowledge that meritocracy stretched to the extreme creates other challenging social pressures (see the movie “Gattaca” for a great discussion of this challenge).

Ultimately, however, the move toward more productive workplaces has to rely on the idea that people become free to work less. This is not a new vision. The idea that ultimately people benefit from being more productive because it means they don’t need to work as much is as old as technology, and has a spotty track record. But I believe the ultimate dream has to be that if you can use a new organization of knowledge work to dramatically boost productivity and drive down idle or unoptimized time for knowledge workers, they should be better off.


There are countless books that have been written about why the Industrial Revolution happened in England when it did, and not earlier or later.

We have had amazing new technologies at our disposal that are the raw ingredients for creating another industrial revolution of knowledge work for quite some time now, but we haven’t yet seen the payoff. I think it is going to come very soon.

2018 has quietly shaped up to be a big year for the move toward the future of knowledge work. And I believe that in 2019, we are going to start seeing the pieces fall into place for the explicit move of knowledge work into the cloud.

The blueprint we are going to be following is exactly the move to the cloud that we just experienced for computing resources, but the practical impact should be much, much greater.

Fin’s Approach in 2019

For the last few years we have been building up the Fin ‘Assistant’ service on an end-to-end AI ‘cloud factory’ model. We chose to target this use case first because it has several properties we think breed good discipline in building toward the future we are discussing: it forces us to deal with broad and ambiguous tasks, and it is relatively easy for a broad set of people to take advantage of early, since everyone can use assistance.

Coming into 2019, our assistant service is coming along nicely. We have thousands of personal and professional customers using Fin to get more leverage on demand in their lives with booking, buying, research, etc.

The economics of the business fundamentally work, we are growing, and we have built the core technical and human services we need to deliver on our vertical use case as well as – we think – point us toward the knowledge-cloud future we are discussing here.

Going into 2019 you can expect a few things from Fin as a company:

  • Improving Quality, Speed, and Personalization: This is the whole point, and we will never be done. Our ‘cloud factory’ means that we are well set up to continuously improve the quality, speed, and personalization of the assistance service we can offer.
  • Growth in SMBs and the beginnings of enterprise use: Following much the same pattern as cloud computing over the last decade, we started with individual professionals and small businesses. They are the fastest to adopt new technologies. Today, we have started to service SMBs and teams that understand how much leverage they can open up for their professionals by adding an assistance layer. Look for us to get better and better at servicing SMBs, and from there we will start to engage with larger enterprises. It took AWS years to build the security and compliance functions needed to get enterprises comfortable with their offering – we will get there.
  • The beginning of cloud service offerings: Finally, moving into 2019, look for us to start offering explicit services for operations teams looking to modernize and start moving toward this cloud-factory model. We believe in offering our own end-to-end service, both because we believe we can do assistance better and because it gives us first-hand operating experience. But there is no question that there is a lot of impact to be had in providing tools that help the millions of knowledge workers in the world be more efficient and productive.

Here is to the year ahead – thanks for being part of the broader Fin journey, and if you have any questions, of course feel free to reach out!