
MIT Professor Rama Ramakrishnan on How ChatGPT Works

June 27, 2023
Season 4 Episode 5
46:34

MIT Professor Rama Ramakrishnan joins Vivek on the pod to delve into the evolution of Generative AI and ChatGPT, as well as his own journey as an entrepreneur turned business school professor.

Few technologies have captured the attention of the general public as well as seasoned software engineers and entrepreneurs like ChatGPT. But how does it work? How can humans use it to increase productivity and output? And how is it likely to evolve?

Our guest for this episode is Rama Ramakrishnan, Professor of the Practice, Data Science and Applied Machine Learning, at MIT Sloan School of Management. He joins Vivek for a wide-ranging conversation about ChatGPT and generative AI, with amusing asides into why Vivek rejected a job offer from Rama at one of his early startups, why the extinction conversation about AI is misguided, and how Rama is enjoying academia after a storied career as an entrepreneur.

Guest: Rama Ramakrishnan

LinkedIn: https://www.linkedin.com/in/ramar/

Twitter: https://twitter.com/rama100

Transcript

Speaker 1:

Welcome to The Closed Session, How To Get Paid in Silicon Valley, with your hosts, Tom Chavez and Vivek Vaidya.

Vivek Vaidya:

Welcome back to season four of The Closed Session. I'm Vivek Vaidya. And as you know, we host very interesting conversations here with some very interesting people. And today, this conversation is a special one, because not only are we going to talk about a very, very relevant topic, generative AI and ChatGPT, but I'm going to be talking to someone whom I've had the privilege of working with at Salesforce. And we actually go way back; there's a very interesting story about how we got to know each other. So our guest today is Professor Rama Ramakrishnan, who is a distinguished professor of the practice at MIT Sloan School of Management.

And Rama is an entrepreneur, an enterprise software executive, and now a professor. So we're going to have a meandering conversation with him about various topics related to ChatGPT and generative AI. The way I met Rama was that he interviewed me for a job at one of his early startups. I didn't take the job, but then our paths crossed again at Salesforce. So without further ado, here's Rama. So excited to have you here, Rama. Welcome.

Rama Ramakrishnan:

Thank you, Vivek. Great to be here. Thanks for having me. And let me just say at the very outset that I have never gotten over your rejecting my job offer from many, many years ago. And it was all brought home to me very vividly when we met again at Salesforce. The opportunity cost of what happened was very vivid for me.

Vivek Vaidya:

And I think I mentioned this to you, the one factor that led to that decision was inertia, really. 'Cause you guys were based in Boston and I was here in the Bay Area, and when I started my job search, this was way back in 1999, I was very open to the idea of living in Boston. But then when I got confronted with, now I have option A and option B, inertia won. And so anyway, it's great to-

Rama Ramakrishnan:

Yeah, never bet against human inertia, right? That's sort of the moral of the story.

Vivek Vaidya:

That's true, that's true. So I know I gave a brief overview during the introduction, but Rama, it'd be great for you to share your journey with our audience. I'm very interested in what led you to becoming a professor after being an entrepreneur and all of that. So talk us through your journey a little bit.

Rama Ramakrishnan:

Yeah, happy to. So I grew up in India, and then after an undergrad degree in engineering, I came to the U.S. for grad school. I got a master's in operations research, which, if you haven't heard the phrase, is basically applied math and econ put together with a little dash of computer science on top. So I did that, then I worked in the airline industry for a couple of years, building optimization, scheduling, and pricing systems, things like that. Then I went back to grad school, to MIT, and got a PhD.

And my PhD was actually in a very theoretical area, part of combinatorial optimization. I did it with someone who's currently the head of the math department at MIT, so it was very technical. And I think I'm safe in saying that my PhD work has basically seen no practical use up to this point. So weirdly enough, I'm proud of that. But after my PhD, I was always very interested in business, and I was also very interested in math and computer science and so on, and I really couldn't pick which way to go. So when McKinsey came recruiting at MIT and they were looking for what they call non-traditional hires, I was like, "Yep, I'm a non-traditional hire."

So anyway, to make a long story short, I ended up spending a bunch of years at McKinsey. And then I spent about 20-plus years as an entrepreneur, being part of various startups. And really the common thread to all these startups was to identify an interesting business problem where I felt that the use of data and algorithms could lead to a much better set of decisions compared to the incumbent approach. I did that along with a bunch of other people in a few different industries: asset management, transportation, retail. And then my most recent startup was a company called CQuotient. We built a machine-learning-based personalization platform for e-commerce, and we ended up selling the company to Demandware, which at the time was the largest cloud-based transaction platform in the world.

And then shortly thereafter, Salesforce acquired Demandware, and Demandware became Salesforce Commerce Cloud. CQuotient became Salesforce Einstein for e-commerce, and that's in fact where, Vivek, you and I crossed paths again. I spent a few years there, and then I got a bit restless. I wanted to do something else, so I left. And that coincided with MIT reaching out to me about coming on board as a professor of the practice. The "of the practice" in the title essentially denotes someone who has a PhD and has been out in industry for a long time, practicing their craft, whom MIT would like to attract back so that they can teach not just the theory of what needs to be done, but also how you actually apply it to create interesting new products and services and, of course, found companies. So I've been on the faculty for, I believe, four, four and a half years now.

Vivek Vaidya:

Wow. We have someone who works at Superset who was at Sloan, and he was telling me last night that your classes are among the most sought-after classes in the business school.

Rama Ramakrishnan:

Oh, wow. Okay. That's so nice to hear. Thanks for sharing that.

Vivek Vaidya:

And if I can say this, unlike your PhD, your industry work still lives on, and the work you did at CQuotient is still powering recommendations and billions of dollars of value for businesses across the world.

Rama Ramakrishnan:

Thank you. Thank you, Vivek. That's very kind of you.

Vivek Vaidya:

Yeah, yeah, yeah. So generative AI, ChatGPT.

Rama Ramakrishnan:

Yes.

Vivek Vaidya:

That's the new hotness, right?

Rama Ramakrishnan:

Right.

Vivek Vaidya:

So if you can, quickly if possible, just give a history of what GPT is. What is GPT-3, what is GPT-4, what do these numbers mean, and how did we get here?

Rama Ramakrishnan:

Yeah, it's actually an incredibly interesting story. Maybe just roll the clock back a little bit. There is this notion of what's called a language model, and now we use the phrase large language model very casually. But if you go back, the idea of a language model was: could we build a statistical model which, given a phrase in, say, the English language, can predict the probability that that phrase will occur in the wild, that you'll actually use that phrase somewhere? So from that perspective, if the phrase, let's say, is "the mat sat on the cat," hopefully the model will say, "Look, that's really unlikely to happen." While "the cat sat on the mat," that's hopefully very likely to happen, if the model is any good.

And then if you follow along a bit more, you realize that instead of just giving it a phrase and saying, "Hey, how likely is it?", you can give part of a phrase to a model and say, "Hey, what is the most likely next word that's going to pop up?" Statistically, what is the most plausible thing that you'd expect to see? And so when you give it "the cat sat on the," hopefully it'll come back and say "mat," and not "spaceship" or something, right? So that is the fundamental idea of a language model. And so people have been working on different ways of building these models by training them on vast quantities of text data.
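The next-word-prediction idea Rama describes can be sketched with a toy bigram model: count which word follows which in a small corpus, then turn the counts into probabilities. This is a deliberate simplification for illustration only; GPT-style models use neural networks over tokens, not word counts.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "vast quantities of text data".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat sat on the rug ."
).split()

# Count how often each word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | current word) from the counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Given "the", the model prefers words it has actually seen follow "the".
print(next_word_probs("the"))  # "cat" and "rug" come out as the most likely continuations
```

The same machinery, scaled up to longer contexts and vastly more data, is what lets a model rate "the cat sat on the mat" as far more plausible than "the mat sat on the cat."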

And those things were moving nicely along, things were getting better and better. And then at some point people decided that they were going to use these deep neural networks as the underlying mechanism to build the model, as opposed to some other technique. And then things suddenly improved. And then somebody else came along and said, "You know what, why don't we use this new neural network architecture called the transformer instead of whatever was used previously? Let's drop that in and see if it gets any better." And suddenly it got significantly better, right?

And so the first incarnation of this line of work was GPT, which stands for generative, for obvious reasons; pre-trained, which I'll get to in a second; and transformer, because it uses a transformer. And pre-trained here just means that we take this thing and we pre-train it on a vast amount of data. Just imagine all of Wikipedia, for instance. And the notion of pre-training is actually very subtle and clever. So perhaps I can just spend a moment on it?

Vivek Vaidya:

Yeah, yeah.

Rama Ramakrishnan:

So typically, if you go back to traditional machine learning, there is this notion of, "Okay, we're going to learn to do something just by looking at a whole bunch of input-output examples." So you give it a picture and it has to figure out if it's a dog or a cat in the picture. Well, you've got to come up with a label which says whether it's a dog or a cat. And then you create a hundred thousand of these pictures, and therefore a hundred thousand of these labels, and then you're off to the races. But obviously finding all these labels, or affixing all these labels, is very labor-intensive.

So there's always this quest to see, can we do some kind of clever shortcut so we don't have to do any labeling, right? So when you're working with language, what people figured out early on was that language actually has these built-in labels for you, so that you can be totally lazy about this.

Vivek Vaidya:

Oh, interesting.

Rama Ramakrishnan:

So, for instance, you can tell the model, "Hey, the input is 'the cat sat on the,' and I want you to predict the next word. And the right next word is 'mat.'" Similarly, I can give it "Elementary, my dear..." And the right label is "Watson."

Vivek Vaidya:

Watson, yeah.

Rama Ramakrishnan:

Right. So basically you can take phrases, make the first part the input and make the last part the output, right? It's like this unbelievable, basically free, zero-cost label generator, right? And so what that meant was that you could train all these models with abundant, zero-cost labeled data. And so that's what they did. That's what pre-training means: basically, training on zero-cost labels is pre-training.
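The "free labels" trick is short enough to sketch directly: split each sentence so that every prefix becomes an input and the word that follows it becomes the label, with no human annotation anywhere. (A word-level sketch of the idea; real pre-training operates on token sequences at enormous scale.)

```python
def make_training_pairs(sentence):
    """Turn one sentence into (input prefix, next-word label) pairs.

    No human labeling needed: the text itself supplies the labels.
    """
    words = sentence.split()
    return [
        (" ".join(words[:i]), words[i])  # prefix -> the word that follows it
        for i in range(1, len(words))
    ]

pairs = make_training_pairs("the cat sat on the mat")
# ("the cat sat on the", "mat") is one of the five resulting pairs
```

Run over a whole corpus, this turns raw text into an effectively unlimited supervised dataset, which is exactly why pre-training can use "all of Wikipedia" without anyone labeling anything.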

Vivek Vaidya:

When you narrate it like that, it's fascinating how a lot of these breakthroughs that happen in all aspects of technology are clever engineering, clever techniques or tricks that people use to take ideas from one domain and then apply them to another. So instead of labeling the image as dog or cat, you're like, "Oh, I'm going to take all the sentences, break them up in certain ways, and then assign the completion as the output for the pre-completion, or the prompt, which was the input." It's very clever.

Rama Ramakrishnan:

Exactly. It's very clever. And by the way, this is an example of just one technique in a whole area of techniques called self-supervised learning. And that is in fact the basis for a whole bunch of amazing models we are seeing today. It was just one way of doing it. There are many ways of implementing the same idea. You take the input, essentially take part of the input and make it the actual input and take the rest of the input and make it the label, right? But that's exactly right. So there are a lot of these beautiful tricks that came together to make it work, and that was GPT.

And it was quite good. It was better than the previous alternatives. Then they were like, "You know what, let's just now build a bigger model." You can always make these models bigger. One of the things people realized when working with these deep neural networks is that if you use a bigger network, i.e., if there is enough data for you to keep the bigger network happy, then you'll actually come up with a better model: better predictions, better performance, and so on. So there's always this thrust to build bigger and bigger models, as long as you can feed them enough compute and data.

Vivek Vaidya:

And that's key. The data part is key.

Rama Ramakrishnan:

It is key. It's key.

Vivek Vaidya:

I think Peter Norvig said it best where he said, more data beats better algorithms, but better data beats more data. So you're spot on, that not just more data, but if you feed it better data, it will give you better results.

Rama Ramakrishnan:

Exactly. Exactly. That certainly has been the case here. And so they did GPT-2; it was better than GPT, everything was good. Then they did GPT-3. GPT-3 has 175 billion parameters, which is basically just a proxy for how big the neural network is. GPT-2, if I recall, had 1.5 billion parameters. So this was a [inaudible 00:13:11] jump in the number of parameters. And what they suddenly began to realize was that GPT-3 started to show what is called emergent behavior. This feels a bit like Skynet, where suddenly the Terminator's eyes open and you see this red thing.

It's not like that, of course. What I mean by emergent behavior is that you have a model and you test it on some task, like, say, arithmetic reasoning or whatever, and it does a pretty okay job, not terrific. Basically not very much better than random. So the models get bigger and bigger, still hovering around random, and then suddenly you make the next bigger size of the model, and the thing shoots way above random.

So you can imagine a curve that's basically flat, and suddenly there is a ramp up. And so it's like, "Wait a second, the architecture is the same, everything we are doing is the same, it's just that we are using a bigger model." Suddenly it has woken up and it's doing some really clever things, even though its predecessor, which is one-tenth of its size, just couldn't do anything all that good. So going back to physics, you can think of it as a phase transition. Things are just getting more and more and more, and suddenly something dramatically different happens.

Vivek Vaidya:

There's a change point almost in the process.

Rama Ramakrishnan:

Yeah, exactly. I think someone described it, which I always liked, as a quantitative change in something leading to a qualitative change in something. The quantitative change is the number of parameters, but the qualitative change is: before, it could not solve a reasoning problem; now, suddenly, it can. Like, "Whoa." So GPT-3 was pretty much the first time that this was compellingly obvious when you looked at the data. The other thing they realized about GPT-3 was that not only could it do things that the smaller ones could not do, it could do new kinds of tasks that it was not even trained to do.

Vivek Vaidya:

Oh, wow.

Rama Ramakrishnan:

And the way they would do it is they wouldn't even have to change the weights or anything. You tell it, "Hey, I want you to do X," and then you just give it a few examples of what you want it to do. It'll just learn from those examples, quote unquote, "in real time" and solve it for you, even though the model's internals have not changed. So if you think about traditional machine learning, when you want the model to do a particular task really well, guess what? You've got to collect a whole bunch of data, train it like crazy, update all the weights and stuff like that, and then you have a hope of it working okay.

But here, the weights haven't changed. You're just giving it a few examples in the input, and suddenly it sort of learns what to do. Now, there's a separate question as to whether it's actually learning from your examples or whether the examples essentially locate the model in the right part of where it needs to operate. But that's sort of an in-the-weeds research question. The point is that it does what's called in-context learning. You give some examples, and it just learns on the fly, for a completely new task. So that was a bit of a shocker. Nobody expected that to happen. And in my opinion, that's really what put GPT-3 on the map, and obviously for good, deserved reasons.

Vivek Vaidya:

You mentioned in-context learning, and there's also this term called fine-tuning. Are they the same? What are the differences between the two if they're not the same?

Rama Ramakrishnan:

Actually, that's a really good question. Fine-tuning is: let's say you have a model and you want to use it to solve a particular task, and it turns out it's not all that great at that task, or not good enough for your needs. In which case, what you do is collect a whole bunch of input-output examples of the task being done well, much like you would in traditional machine learning, and then you train the model using supervised learning, just like you would train any other model. Once you're done with that, this is fine-tuning; once you're done with that, you have a fine-tuned model whose weights have actually changed permanently, for the better.

Vivek Vaidya:

Correct, correct.

Rama Ramakrishnan:

Right? But in-context learning is different. Instead of amassing, let's say, several hundred examples and changing the weights via fine-tuning, you literally tell the model, for example, "I want you to translate English sentences to French. Here's an English sentence, here's a translation, here's an English sentence..." And you give it 10 of these examples. And then you give the 11th English sentence alone, and the model learns to basically auto-complete the translation for you. Right? So when you do this, note that the weights of the model haven't changed.
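The "weights change permanently" half of this contrast can be made concrete with a deliberately tiny stand-in for a neural network: a single-parameter model trained by gradient descent on a handful of input-output examples. This is a toy sketch of supervised fine-tuning, not how a real LLM is tuned, but the essential point, that the parameters end up permanently different, is the same.

```python
# A one-weight "model": predict y = w * x. A stand-in for a huge network.
w = 0.0

def predict(x):
    return w * x

# Input-output examples of the task being done well (here, the task is y = 2x).
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Fine-tuning: a supervised training loop that permanently updates the weight.
learning_rate = 0.05
for _ in range(200):
    for x, y in examples:
        error = predict(x) - y
        w -= learning_rate * error * x  # gradient step on squared error

print(w)  # now close to 2.0: the model's internals have changed for good
```

In-context learning, by contrast, would leave `w` untouched and put the examples into the input instead, which is exactly the distinction Rama draws next.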

Vivek Vaidya:

Haven't changed. So the key is are you altering the model in any way, shape, or form or not, right?

Rama Ramakrishnan:

Exactly.

Vivek Vaidya:

And where does few-shot learning fit in? All these terms that people throw around, right?

Rama Ramakrishnan:

Yeah, exactly. I know, people use a lot of crazy terminology in this space. It's hard to keep straight. So what I just explained, where we give it, say, a few examples of what it should do, and then give it an incomplete example, which it will then auto-complete, that is an example of few-shot.

Vivek Vaidya:

I see, I see.

Rama Ramakrishnan:

So you're basically giving it a few examples. So whenever you see the word "shot," just do a search-and-replace with the word "example."

Vivek Vaidya:

Right. Okay.

Rama Ramakrishnan:

Few-shot just means few examples. Similarly, there's zero-shot, where you give it no examples. You just say, "Hey, translate this English sentence to French for me," and you give it the English sentence, and it'll translate it for you, no examples needed. So zero-shot and few-shot are all examples of in-context learning.

Vivek Vaidya:

Right, exactly.

Rama Ramakrishnan:

Because all of them only change the input. They don't change the internals of the model.
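In practice, the zero-shot/few-shot distinction is just a matter of how the prompt string is assembled, since only the input changes. A sketch (the instruction wording and example sentences are made up for illustration):

```python
def build_prompt(instruction, examples, query):
    """Assemble an in-context-learning prompt for a translation task.

    len(examples) == 0  -> zero-shot
    len(examples) == k  -> k-shot (few-shot)
    The model's weights are never touched; only the input changes.
    """
    lines = [instruction]
    for source, target in examples:               # the "shots"
        lines.append(f"English: {source}\nFrench: {target}")
    lines.append(f"English: {query}\nFrench:")    # left for the model to complete
    return "\n\n".join(lines)

few_shot = build_prompt(
    "Translate English to French.",
    [("Good morning.", "Bonjour."), ("Thank you.", "Merci.")],
    "See you tomorrow.",
)
zero_shot = build_prompt("Translate English to French.", [], "See you tomorrow.")
```

Both prompts end mid-pattern, so a language model trained purely on next-word prediction completes them with the translation, which is the whole mechanism of in-context learning.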

Vivek Vaidya:

And that's the key difference, is that with fine-tuning, you're actually modifying the model in some way, shape, or form. Either the weights of the model change, or maybe an extra layer gets added to the deep net, or whatever happens. So the model changes, but the amount of time it takes to fine-tune the model is not as long as it would take to train GPT-3 from scratch, for example. That's where the advantage of something like fine-tuning comes in?

Rama Ramakrishnan:

Exactly. I think of it as: building the model from scratch is on the order of months and millions of dollars, while fine-tuning it is a matter of hours and not a whole lot of money.

Vivek Vaidya:

But then it's interesting. Now coming back to Peter Norvig and what we were talking about earlier, if you're fine-tuning, then the quality of those hundred or thousand examples that you used to fine-tune becomes super important. No?

Rama Ramakrishnan:

Super important. Super important. Absolutely. In fact, I just saw something maybe a couple of days ago, this model called LIMA, L-I-M-A, where they were able to take the plain vanilla equivalent of GPT-3, which is basically the LLaMA family, and then they used, I believe, exactly a thousand carefully curated, assembled examples. And that combination was able to do really, really well compared to some of the models which had tens of thousands of these examples. So a few high-quality examples tend to be disproportionately effective when you're doing this.

Vivek Vaidya:

Yeah. I think it's been interesting to see this transition from model-centric AI to data-centric AI. And now with generative AI, it's become more and more front and center that you just have to pay a lot more attention to the quality of your data, as opposed to tuning your model and optimizing hyperparameters and those types of things.

Rama Ramakrishnan:

Exactly. Exactly. And that's a great observation, Vivek. And I just want to add something, which is that you can actually use generative AI models like ChatGPT and LLaMA and so on, to create synthetic data for you to train other models.

Vivek Vaidya:

Correct. Correct, yeah.

Rama Ramakrishnan:

The picture has gotten even more crazy. So you have your own human-created high-quality data, and then you have synthetic data created by a model. Obviously, because it's coming from a model, you can be profligate with it and just crank out thousands and thousands of examples. So you have this perpetual "a few good things, or a lot of not-so-good things, which is better?" kind of question. This problem has only gotten worse.

Vivek Vaidya:

Yeah, and using generative AI to create data for training is another example, to me at least, of a clever trick that engineers or data scientists use to do more with less, so to speak.

Rama Ramakrishnan:

No, I think that's a great example of a very clever trick. When I first read this, I think the paper is called Self-Instruct, if I recall, I was like, "Wow." I think they start out with 175 human-created examples, and then use this thing to create 52,000 examples, and then use that for doing the instruction fine-tuning of the base model.

Vivek Vaidya:

Interesting.

Rama Ramakrishnan:

And that's how they built this thing called Alpaca, which is one of the first open source [inaudible 00:22:17] models to come out.
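The bootstrapping pattern behind this, growing a large synthetic training set from a handful of human-written seeds, can be sketched as follows. Note that `generate_like` here is a hypothetical stand-in for what Self-Instruct actually does at that step, which is prompting a large model to produce new, similar examples.

```python
import random

# A couple of human-written seed examples (in the spirit of Self-Instruct's 175 seeds).
seed_tasks = [
    ("Summarize: The meeting moved to 3pm.", "Meeting is now at 3pm."),
    ("Translate to French: Hello.", "Bonjour."),
]

def generate_like(task, n):
    """Hypothetical stand-in for prompting a large model with a seed task
    and asking it to produce n new, similar (instruction, output) pairs."""
    instruction, output = task
    return [(f"{instruction} [variant {i}]", output) for i in range(n)]

def self_instruct(seeds, target_size, seed=0):
    """Bootstrap a large synthetic training set from a handful of seeds."""
    rng = random.Random(seed)
    data = list(seeds)
    while len(data) < target_size:
        template = rng.choice(data)        # pick an existing example to imitate
        data.extend(generate_like(template, 5))
    return data[:target_size]

synthetic = self_instruct(seed_tasks, 100)
print(len(synthetic))  # 100 examples grown from 2 human-written seeds
```

The real pipeline also filters the generated examples for quality and diversity before fine-tuning on them, which is where the "few good things vs. many mediocre things" tension shows up.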

Vivek Vaidya:

So one of the things that's been consistent throughout our discussion so far is the importance and use of data in all of this. And all of these models... Let's talk about OpenAI. They're managed and run by OpenAI. So every time somebody wants to use OpenAI, they have to send data over to OpenAI. Now, if I'm doing it as a hobby, who cares? I downloaded some data from the internet, I send it over. But if I'm an enterprise that is using OpenAI, should I be concerned about the data I'm sending them? How should I think about that?

Rama Ramakrishnan:

Well, first of all, I do think that OpenAI, I heard this from someone, does have very clear, written-down policies on how exactly the data is treated, what they do with it, and things like that. And my guess is that if you're an enterprise customer of OpenAI, it'll probably be something which you can run through your legal team and so on and so forth to make sure that you truly understand exactly what happens. I haven't seen it myself, so I can't comment on it directly, but I've heard secondhand that they do have these policies in place. So the first line of defense, of course, is to figure out exactly what happens to your data.

But if you're not an enterprise customer, if you're just using OpenAI to do something, I think it's worth trying to figure out what happens. So, for example, there are at least two kinds of risks I think one needs to be cognizant of. One, of course, is that your data, to the extent that there is some privacy and confidentiality around it, is now actually going into the cloud and going to become part of something else. If it becomes part of training the model, the model might regurgitate it to somebody else without your knowledge. That is actually, in my opinion, a very important problem that you need to worry about.

Vivek Vaidya:

Yeah. And I think it's kind of come out because you have GitHub Copilot, which helps people write code, and it was trained on all of the source code available in GitHub. Yes, it was all open source code, but you feed it some of your own code, and to your point, it might learn from that. And suddenly when I use it, I get your code as output, and that might not be okay with you.

Rama Ramakrishnan:

Exactly. Or, for example, let's say that you want to create a personalized sales email, right? So you send the information about your prospect, along with, "I've spoken to them three times, they seem very excited about this enterprise database product, blah, blah, blah. Write a nice email for me." And the thing sends back the email, and then guess what? That person's particulars show up in somebody else's interaction. So you can have data leakage through this mechanism, code leakage through this mechanism, like you pointed out. So that is one big problem. I think the other problem is that you may get some very compelling output back from ChatGPT, but it turns out that output may actually be using copyright-infringing content created by somebody else.

Vivek Vaidya:

So now if you take that and put it in your application, it becomes your responsibility.

Rama Ramakrishnan:

Right. It potentially becomes your problem, right? Because it's not clear that you can just pass on the liability to somebody else.

Vivek Vaidya:

Correct.

Rama Ramakrishnan:

So I think that picture is also very murky, which is why I think a lot of the open source large language model efforts are very focused on making sure the data on which the language model is trained is, I think they have a good word for it, I think it's called permissive licensing or something like that. That's the phrase they use, where they basically say, look, if you are a content provider, you can totally opt out of your data being used for this, number one. Number two, they go out of their way to look for content where it's explicitly stated that they can use it for stuff like this. So they're doing it. I think many of the models on Hugging Face, for example, the StarCoder model they released, which is an LLM-assisted coding product, they were very, very careful; a lot of their effort went into making sure that they used code bases where there was explicit permission to use them.

Vivek Vaidya:

Interesting.

Rama Ramakrishnan:

And I think that's going to certainly, I think, have much more momentum going forward because nobody wants to be liable for using some random stuff that they had no idea they were using.

Vivek Vaidya:

I'm surprised, and this may already have happened and I'm just not aware of it, but I think there's going to be an open source license, like the MIT license or the Apache license, a modification of one of those, where it'll explicitly say that yes, you can use this data for generative AI training and whatnot, or make it a permissive license for generative AI training.

Rama Ramakrishnan:

Yeah, I think you're right. That feels like it'll probably happen very soon.

Vivek Vaidya:

So kind of shifting gears slightly: all of this stuff that generative AI and ChatGPT can do now, what does it do to jobs? There's all this talk about people losing their jobs, AI taking over. What do you think about all that?

Rama Ramakrishnan:

Boy. I think it's a very difficult question, right? A very complex question. And just a caveat: I'll tell you what I have gleaned from what I've seen so far. Obviously these comments are speculative, but this is what I'm thinking right now, and it's subject to change tomorrow. With that said, I think maybe we can divide the conversation into three parts. The first is what it does to existing jobs. The second part is what it does in terms of being able to do things in your current business that you just couldn't do before, job-wise. And then the third is, what new jobs is it going to create? So in terms of the first, existing jobs, I feel, first of all, that it's much more useful not to think about a job monolithically, but to think about a job as really a collection of individual tasks, and then figure out, for each task, what the likelihood is of it being automatable using something like ChatGPT.

Because the task is a smaller, discrete unit where you can more easily think about substitutability using something like ChatGPT. So from that perspective, there was, for example, a UPenn study that came out a few weeks ago which says, if I recall, that for over 80% of jobs, at least 10% of the tasks that you do as part of the job are automatable. And that's actually a big number, 80% of jobs.

Vivek Vaidya:

 For 80%? So let me just make sure I have this right. For 80% of the jobs, 10% of those jobs...

Rama Ramakrishnan:

At least 10% of the tasks...

Vivek Vaidya:

... tasks in those jobs were automatable using AI?

Rama Ramakrishnan:

Yeah, exactly. And of course it's an early estimate, but it seems reasonable to me because, given the ability of large language models to create human-sounding text and things like that, a lot of knowledge work is basically up for some level of automation, right? And we are seeing evidence of it already: someone did a test-control experiment in a call center organization, and they found that the output of the group that was using these tools was 14% higher than the output of the control group.

Vivek Vaidya:

Oh, wow.

Rama Ramakrishnan:

One four, 14%. So clearly it's a productivity booster, right? You can get more output for the same input. But the question of what it means for jobs, I think, is actually a bit more involved. And the way I think about it is this: think of something like ChatGPT as a technology that has basically reduced the unit cost of producing knowledge work output. The unit cost has gone down. So when something becomes cheaper per unit to produce, there are two things that can happen at the extremes. One extreme is that you keep the current level of people that you have, arm them with ChatGPT, and just ramp up your output, right? That's one extreme.

The other extreme is like, "Well, I want to keep the output flat, and then I'm just going to basically eliminate a bunch of jobs so that I have fewer people generating the same output." Those are the two extremes, right? The question is, for any particular job, where on this continuum are you going to lie?
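Rama's two extremes can be made concrete with some back-of-the-envelope arithmetic. The team size and output figures below are purely illustrative; only the 14% productivity gain comes from the call-center study mentioned earlier:

```python
# Illustrative numbers: a hypothetical 100-person team whose per-person
# productivity rises 14% (the call-center figure mentioned above).
productivity_gain = 0.14
team_size = 100
baseline_output = 1000.0  # units of knowledge work per week (made up)

# Extreme 1: keep everyone, ramp up output.
ramped_output = baseline_output * (1 + productivity_gain)
print(ramped_output)  # 1140.0

# Extreme 2: hold output flat, shrink the team.
needed_team = team_size / (1 + productivity_gain)
print(round(needed_team, 1))  # 87.7
```

Real jobs land somewhere between these endpoints, which is exactly the continuum being described.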

Vivek Vaidya:

It's a very good way of looking at it because most people are right now looking at the pessimistic outlook, which is jobs will be eliminated because people want to keep output constant. But there is another optimistic view, which is, "Well look how much more you can do with the workforce you have, how much more efficient you can make them." And I think more people should be talking about that.

Rama Ramakrishnan:

Yes, I agree. But I think it really comes down to what I think of as the incremental value of increasing your output. So imagine you're running a software company, and you have run many, so you know what I'm talking about. You may have a marketing team that is producing content for your blog and social media and things like that. And then you have obviously your software engineering team. And we both know that ChatGPT can certainly generate marketing content really well, and it can obviously help you with coding really well. So the question is, what should we do?

So you might decide that for the marketing team, you don't need to ramp up the output because you've already sort of maxed out on it. You're putting out a lot of content. If you increase the content even more, it's just going to clutter the marketing sphere, and maybe it's not good for your company to be viewed as too spammy. So you may decide, "You know what, I'm not going to increase the output. The marginal value of an incremental unit of output is zero. So I'm actually going to reduce my marketing team."

Vivek Vaidya:

Or another way of looking at that is because ChatGPT makes marketing content generation more efficient, I can perhaps do more experiments, right?

Rama Ramakrishnan:

Yes.

Vivek Vaidya:

And I'm not spamming, but I'm generating the same quantity but the mix is different, right?

Rama Ramakrishnan:

Yeah, you could totally do that. You could totally do that. But it might turn out that you can even experiment with a different mix and so on, but perhaps you don't need as many people.

Vivek Vaidya:

Sure.

Rama Ramakrishnan:

And maybe at the margin you reduce slightly, right? So that's one. But if you look at the software engineering side, you may be like, "You know what, I can release features at twice the rate that I've been able to, and I know that my competition is going to do that. I almost don't have a choice. So I'm going to keep my engineering team as it is, and I'm just going to make sure every single person is using Copilot or something like that, so that I can ramp up the output." Because I know the increased output is going to be so valuable to my business. In fact, if I don't do it, it's going to be value-destroying.

So I think even in a small software company microcosm, you can see how the dynamics play out, right? It really depends on the incremental value of the output. If you aggregate these decisions over the whole economy, I don't know what's going to happen. It's very difficult to know how it's all going to net out.

Vivek Vaidya:

Well, I think it's a great framework for leaders to have. Depending on the function, you can think about this incremental increase in value, whether you want it or not, or what happens when you suddenly ramp up the output, as in the marketing example you gave. So it's a good framework for people to have as they think about how to integrate ChatGPT into their day-to-day business. It's interesting, as you're saying all that: the kinds of applications that people are going to be building now will be more workflow-driven, will have more aspects of collaboration, et cetera. It's not going to be so much, "Oh, my model is better than your model." It's how do you take the model and build value-driving, value-generating applications on top of it?

Rama Ramakrishnan:

Absolutely. I could not agree more. I think the LLM is going to be one core part of the whole thing. And as you know, you will need databases, you will need security, you will need scalability. You'll need front ends and back ends, pre-processing, post-processing; the list goes on. We've all seen that movie before. So I think those things are all going to become super important. But I do think that entrepreneurs who really understand the workflow for a particular business process are in a great spot, if you ask me. Because you can think about a business process and ask yourself, "If I have an AI co-pilot for this process, how can I dramatically improve the outcomes that come from it?"

Well, guess what? You need to embed the LLM, with the right wrappers around it, into the right place in the workflow, so it doesn't intrude on what people do; it augments them. And I think that calls for a deep understanding of the business, the domain, the problem, and stuff like that. So I feel that if an entrepreneur has been working in insurance claims review, for instance, or drug discovery or something, my God, it's such a cool time to be alive. Because you can take all that knowledge you have of the domain, and you can work with folks who know how to work with LLMs, and you can create this beautiful sort of hybrid thing in which the LLM is injected in just the right places so as to maximize the final throughput of whatever you're producing.
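As a rough sketch of what "injecting the LLM in just the right place" can look like, here is a minimal, hypothetical claims-review step. Every name here is invented for illustration; `call_llm` is a stand-in for whatever model API you actually use:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (an API request in practice)."""
    return "DRAFT: " + prompt.splitlines()[-1]

def draft_claim_summary(claim_text: str) -> str:
    """The injected LLM step: it drafts, it does not decide."""
    prompt = "Summarize this insurance claim for a human reviewer:\n" + claim_text
    return call_llm(prompt)

def review_claim(claim_text: str, reviewer_approves) -> str:
    """The surrounding workflow: the LLM augments the human reviewer."""
    draft = draft_claim_summary(claim_text)
    if reviewer_approves(draft):
        return draft
    return "ESCALATED: reviewer rejected the draft"
```

The point of the wrapper is that the model only ever produces a draft inside an existing process; the decision stays with the person who owns the workflow.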

Vivek Vaidya:

100%, yeah. And that is our core thesis. Now, like everybody else in the world, we're exploring generative AI in earnest as a category in which to build companies and products, where we can take business processes and figure out where to inject LLMs to drive value. So let's explore another topic, Rama, which I think is germane to this conversation. What about risks? All this data flowing around, people sending data left, right, and center. As you said, we don't even know what data all these models are trained on. We were talking about permissive licensing and things like that. Is there going to be regulation? How is governance going to occur for all of this, do you think?

Rama Ramakrishnan:

Gosh, yeah. So maybe before we talk about regulation and governance, just a quick comment on the risks for any company that's beginning to use LLMs to augment some of their workflows. I think it's worth remembering that LLMs like ChatGPT, while they can be brilliant one moment, can be really dumb the next. For example, their output could be just factually wrong. It could be toxic, it could be biased, it could have the wrong tone, or it may just not do the things you want it to do. It may answer a different question accurately, but who cares? So given all that, I feel that it's very, very dangerous to use today's LLMs in any sort of automated, lights-out fashion.
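The "lights-out" danger Rama describes is why production uses of LLMs typically gate raw model output behind checks before anyone outside sees it. A minimal sketch, assuming an invented banned-phrase screen and a human-approval callback (real systems would add fact-checking, toxicity classifiers, and so on):

```python
def passes_basic_checks(output):
    # Automated screens run first; the banned list here is purely
    # illustrative, not from any real policy.
    banned = ("guaranteed returns", "medical diagnosis")
    text = output.lower()
    return bool(text.strip()) and not any(b in text for b in banned)

def gated_publish(draft, human_approves):
    """Nothing is published unless the checks AND a human say yes."""
    if not passes_basic_checks(draft):
        return None  # blocked before any external party sees it
    return draft if human_approves(draft) else None
```

The structure matters more than the specific checks: the model's output is treated as a candidate, never as the final answer.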

Vivek Vaidya:

Interesting. Yeah.

Rama Ramakrishnan:

I think it's really important to have some way of validating the output, confirming that it's good, before it sees the light of day with an external party, a customer, an employee, or a partner. And I think companies that are beginning to experiment with this always have a human in the loop right now, right? So I feel like that's rule number one: have some way to check the validity of the output before you actually let anybody else see it, because it's going to be very difficult to walk some of these things back if there are blowups. So that's the first thing to remember. But moving more broadly to the idea of regulation and governance, I think there's a lot of talk about the risks caused by these systems. Most notably, this whole extinction risk has been much talked about the last couple of weeks, right?

In my opinion, the extinction risk is sort of overblown. I feel it is actually distracting us from a whole bunch of other risks that are all too real right now. For example, the risk that a whole bunch of job loss is going to happen. There's a whole bunch of cybersecurity risks that are already materializing, because clearly the bad guys can download an open source model just like you and I can. And then there is a whole bunch of risk around income inequality being exacerbated, because the folks who are already well off are going to benefit disproportionately from this technology. There is the misinformation and disinformation risk: our information space is going to get much more easily polluted with a whole bunch of very realistic-sounding nonsense. And what that means is that if you feel you can't trust any piece of information that's out there, then you will sort of stop trusting all information, right?

Vivek Vaidya:

I could create a video of you, you're in Boston right now, I could create a video of you sitting right here. And there would be no way for you to dispute that or it'd be very hard for you to dispute that.

Rama Ramakrishnan:

Exactly. It puts a tax on everything when that happens. So those are all incredibly real risks that we have right now. And I think it's not very clear how we actually go about addressing them. But there have been many examples in history where there were similar problems that were considered and the government did something, right? So for example, I heard there's an analogy somewhere where they said, look, obviously it's very easy, it's very valuable if you can print counterfeit money, right? So that's the ultimate deep fake.

Counterfeit money is the ultimate deep fake, because you could use it without being detected. But the thing is, clearly it's not a huge problem. We have it under control. It's not like the economy is awash in counterfeit money; it's a very small fraction of the economy. And how does it work? Because we have regulation, we have enforcement, we have a whole infrastructure that is designed to make sure that problem is kept as low as possible. So I'm sure there are similar mechanisms we can use to control other kinds of problems that AI is going to cause. So I feel like there is a role for thoughtful regulation on those kinds of risks, and I think we should do that. At the same time, I feel that we should make sure we don't over-regulate the open source community, or startups, or AI more broadly. I think it's dangerous to try to slow down the rate of development here, for a couple of reasons.

One reason, of course, is that I think it's one of the greatest things we have ever come up with, and it's going to lead to all sorts of amazing benefits. Obviously, we have already solved protein folding. What's next? It's an unthinkably hard problem that we now take for granted, right? It's been solved, and I think this technology is going to do amazing things like that in the future. That's one reason. But people may disagree with me on that; maybe they think I'm just being overly optimistic about this stuff. But there's another, even better reason in my opinion, which is that the bad guys are not going to be sitting still, right? They're not going to be sitting around saying, "Oh, sure. Yeah, I won't do anything for the next six months. Thank you."

So if the bad guys are not going to pause, then gosh, the worst thing we can do is to be throttling our development and putting a pause on it. Just to make sure we are ahead of the bad guys, we need to keep doing this thing [inaudible 00:42:34] avoid the negative scenario. But I also think the positive scenario is just as compelling, so let's embrace it.

Vivek Vaidya:

No, I think this balanced view is what we need, where you need to be extremely cognizant of the risks, as you were mentioning earlier, and (not but) invest in development, because the potential for progress is just huge. So one final question for you, Rama. You mentioned earlier, what a time to be alive. So does the practice part of you want to come out and do something again?

Rama Ramakrishnan:

Yeah, that's a good question, Vivek. I think about it every so often, but I'm actually really happy being an educator. Because fundamentally, I'll be totally candid with you, what I really love is to learn all this cool stuff. And basically I'm being paid to indulge my love of learning. The only price I have to pay is to occasionally take what I learn and try to distill it into something that I can teach other people. But that's a good forcing function anyway, because otherwise I'll just be reading something and nothing good comes out of it.

So I'm kind of happy doing what I'm doing. I do miss the occasional, "Gosh, wouldn't it be nice to be part of a team and try to conquer the mountain?" You know what I mean: the whole startup thing, working with a bunch of people that you like and respect on a common mission. I miss that, because I think being an academic is a bit of a solitary pursuit, frankly. But except for that, I'm actually very happy being in this space, working with students, advising students, hearing about all the exciting companies they're trying to build, and trying to be helpful.

Vivek Vaidya:

And I'm sure you get calls from folks in your network to be an advisor, and so you get to be part of the innovation process indirectly as well, right?

Rama Ramakrishnan:

Exactly. So yeah, I'm an advisor to a bunch of startups like you pointed out, and that's really been fun because I sort of stay plugged into the ecosystem to see what's going on. But it's nice because it's sort of like a portfolio of things. I teach, I learn, I do research and I advise companies and the whole thing is sort of a nice self-reinforcing ecosystem. So I'm happy with that.

Vivek Vaidya:

Well, Rama, thank you so much for joining me today on this episode of The Closed Session. This was such an enlightening conversation, with lots and lots of very interesting and actionable takeaways for our listeners. So thank you again. It's been fun talking to you. We could have gone on for another three hours on this, I know. But another time.

Rama Ramakrishnan:

You're very welcome, Vivek. It was a real pleasure to be on the podcast with you. I think your questions were very thoughtful, and it was really fun to discuss them with you. And I wish you all the very best for super{set}. I think you guys are doing amazing things. I occasionally check the website to see what's going on, and there's always something new.

Vivek Vaidya:

Well, thank you.

Rama Ramakrishnan:

It's very exciting and I hope one of these days we get to meet in person and catch up.

Vivek Vaidya:

We will definitely do that. It might happen sooner than you might imagine, because there is a good chance I'm going to be in your neck of the woods over the summer. The dates are still being finalized, but when I'm there, I'll definitely ping you and we can grab lunch or something.

Rama Ramakrishnan:

You absolutely must.

Vivek Vaidya:

 All right.

Rama Ramakrishnan:

 All right. I look forward to it.

Vivek Vaidya:

Thank you, Rama. Thank you everyone for joining us. Please don't forget to sign up for our newsletter and to stay up to date and we'll see you next time. Thank you very much.

Hide

Get our monthly newsletter in your inbox.

Oops! Something went wrong while submitting the form.
Written By
Written By
Read next

Introduction to The {Closed} Session

In the first episode of The Closed Session, meet Tom Chavez and Vivek Vaidya, serial entrepreneurs and podcast hosts.

read more

Starting From Scratch

In the second episode of The Closed Session, Tom and Vivek discuss the framework for starting your own company from scratch, and the three dimensions that should be taken into account.

read more

The Business Plan

You’ve decided to launch a business, but before you hurtle blindly into the breach, you need a bulletproof plan and a perfect pitch deck to persuade your co-founders, investors, partners, and employees to follow you into the unknown.

read more

Early-Stage Funding Do’s and Dont’s

In this episode of The Closed Session, Tom and Vivek talk about dilution, methods, mindset, benchmarks and best practices for raising investment capital for a new tech startup.

read more

Early Team Formation

Now that you've written the business plan and raised money, it's time to recruit your early team. In this episode, Tom and Vivek cover the do's and dont's of building a high-output team - who to hire, how to build chemistry and throughput, how to think about talent when your company is a toddler versus when it's an adolescent.

read more

Creating a Winning Culture: Must-Haves, Memes, and Tips

read more

Building a Kickass Product & Technology Engine

read more

Women in Tech

read more

How to Interview for a Startup

read more

Is Tech Stingy? The Case for Doing Well *and* Doing Good

read more

And, we’re live at super{set}!

Welcome to Season 2 of The Closed Session! In this first episode of 2020, Tom and Vivek talk about the five companies super{set} launched in 2019 and the lessons they’re learning as they go.

read more

To Sell or Not to Sell

read more

Quarantine Edition: Let the Rants Unfurl

read more

Equity and Inclusion

Tom and Vivek talk about inclusion and reflect on their personal experiences as brown guys in tech. Inclusion feels like a moral imperative, but does it really make for stronger, better companies? Are there unintended consequences of acting on good intentions to 'fix' an inclusion problem at a company? Why is tech so lacking in diversity, and what can we do to get it right?

read more

Big Tech and Regulation

The drums are beating for Big Tech, and for good reason. In this episode, Tom and Vivek break it all down and explain why you need to watch your wallet, or at least raise your antenna, whenever Google or Facebook say they're making a new product decision "to protect user privacy." Exactly how do their product decisions erode competitive markets and our own data dignity? Recorded at the tail end of 2020 before all of the post-election events unfolded, this episode explains exactly how the major platforms abuse data, why you should care, and what we can do to fix it.

read more

super{set}’s Spectrum Detoxifies The Online Space

We are living in a time of extraordinary concern about the negative consequences of online platforms and social media. We worry about the damage interactive technologies cause to society; about the impact to our mental health; and about the way that these platforms and their practices play to our most destructive impulses. Too often, the experiences we have online serve only to polarize, divide, and amplify the worst of human nature.

read more

Back to the Office, Kinda Sorta

With vaccines on the horizon, the idea of getting back to the workplace doesn't seem so far-fetched anymore. In this episode of The Closed Session, Tom and Vivek discuss what it's been like working from home, their likes, dislikes, and lessons learned. What pandemic habits are here to stay, and what pre-pandemic routines are likely to re-emerge? Between the 'back-to-workers' and the 'work-from-homers,' Tom and Vivek wonder whether a middle course is within reach.

read more

To SPAC or not to SPAC

Harpal Sandhu, a Silicon Valley veteran and friend of super{set}, joins Vivek and Tom and explains what the excitement about SPAC's is all about. How did we get from IPO's to SPAC's? What's a PIPE? And why does the $10 price show up? In this episode you'll understand why entrepreneurs might prefer a SPAC and how they navigate its possibilities and pitfalls with investors.

read more

From Watsonville To The Moon

This post was written by Habu software engineer, Martín Vargas-Vega, as part of our new #PassTheMic series.

read more

Not Just On Veterans Day

This post was written by Ketch Developer Advocate, Ryan Overton, as part of our #PassTheMic series.

read more

The Balancing Act For Women in Tech

This post was written by Ketch Sales Director, Sheridan Rice, as part of our #PassTheMic series.

read more

The Studio Model

What’s a startup studio? Is it just “venture capital” with another name?

read more

We don’t critique, we found and build.

The super{set} studio model for early-stage venture It is still early days for the startup studio model. We know this because at super{set} we still get questions from experienced operators and investors. One investor that we’ve known for years recently asked us: “you have a fund — aren’t you just a venture capital firm with a different label?”

read more

New Venture Ideation

Where do the ideas come from? How do we build companies from scratch at super{set}?

read more

Silicon Valley’s Greatest Untapped Resource: Moms

This post was written by MarkovML Co-Founder, Lindsey Meyl, as part of our #PassTheMic series.

read more

Good Ideas, Good Luck

Coming up with new company ideas is easy: we take the day off, go to the park, and let the thoughts arrive like butterflies. Maybe we grab a coconut from that guy for a little buzz. While this describes a pleasant day in San Francisco, it couldn’t be further from the truth of what we do at super{set}. If only we could pull great ideas out of thin air. Unfortunately, it just doesn’t work that way.

read more

Data Eats the World

The wheel. Electricity. The automobile. These are technologies that had a disproportionate impact on the merits of their first practical use-case; but beyond that, because they enabled so much in terms of subsequent innovation, economic historians call them “general-purpose technologies” or GPTs...

read more

The Four Types of Startup Opportunities

In our last post, we discussed how data is the new general-purpose technology and that is why at super{set} we form data-driven companies from scratch. But new technologies are a promise, not a sudden phase change.

read more

VCs Write Investment Memos, We Write Solution Memos

When a VC decides to invest in a company, they write up a document called the “Investment Memo” to convince their partners that the decision is sound. This document is a thorough analysis of the startup...

read more

People, First

What does it mean to be a super{set} co-founder and who do we look for? Why is the Head of Product the first co-founder we bring on board?

read more

The super{set} Entrepreneurial Guild

Has someone looking to make a key hire ever told you that they are after “coachability”? Take a look at the Google ngram for “coachability” — off like a rocket ship since the Dot Com bubble, and it’s not even a real word! Coaching is everywhere in Silicon Valley...

read more

Why Head of Product is Our First Co-Founder

At super{set}, we stand side-by-side and pick up the shovel with our co-founders. Our first outside co-founder at a super{set} company is usually a Head of Product. Let’s unpack each portion of that title....

read more

Why I'm Co-founding @ super{set}

Pankaj Rajan, co-founder at MarkovML, describes his Big Tech and startup experience and his journey to starting a company at super{set}.

read more

Too Dumb to Quit

The decision to start a company – or to join an early stage one – is an act of the gut. On good days, I see it as a quasi-spiritual commitment. On bad days, I see it as sheer irrationality. Whichever it is, you’ll be happier if you acknowledge and calmly accept the lunacy of it all...

read more

The Product Heist

Tom and Vivek describe how building the best product is like planning the perfect heist: just like Danny Ocean, spend the time upfront to blueprint and stage, get into the casino with the insertion product, then drill into the safe and make your escape with the perfect product roadmap.

read more

Founder and Father: A Balancing Act

Making It Work With Young Kids & Young Companies

read more

Early Stage Customers

Tom and Vivek discuss what the very first customers of a startup must look and act like, the staging and sequencing of setting up a sales operation with a feedback loop to product, and end with special guest Matt Kilmartin, CEO of Habu and former Chief Revenue Officer (CRO) of Krux, for his advice on effective entrepreneurial selling.

read more

Overheard @ super{summit}

Vivek Vaidya's takeaways from the inaugural super{summit}

read more

How I Learned to Stop Optimizing and Love the Startup Ride

Reflections after a summer as an engineering intern at super{set}

read more

Why I Left Google To Co-found with super{set}

Gal Vered of Checksum explains his rationale for leaving Google to co-found a super{set} company.

read more

The Era of Easy $ Is Over

The era of easy money - or at least, easy returns for VCs - is over. Tom Chavez is calling for VCs to show up in-person at August board meetings, get off the sidelines, and start adding real value and hands-on support for founders.

read more

The super{set} CEO

Tom and Vivek describe what the ideal CEO looks like in the early stage, why great product people aren’t necessarily going to make great CEOs, and what the division of labor looks like between the CEO and the rest of the early team. They then bring on special guest Dane E. Holmes from super{set} company Eskalera to hear about his decision to join a super{set} company and his lessons for early-stage leadership.

read more

How To Avoid Observability MELTdown

o11y - What is it? Why is it important? What are the tools you need? More importantly - how can you adopt an observability mindset? Habu Software Architect Siddharth Sharma reports from his session at super{summit} 2022.

read more

When Inference Meets Engineering

Othmane Rifki, Principal Applied Scientist at super{set} company Spectrum Labs, reports from the session he led at super{summit} 2022: "When Inference Meets Engineering." Using super{set} companies as examples, Othmane reveals the 3 ways that data science can benefit from engineering workflows to deliver business value.

read more

Infrastructure Headaches - Where’s the Tylenol?

Head of Infrastructure at Ketch, and Kapstan Advisor, Anton Winter explains a few of the infrastructure and DevOps headaches he encounters every day.

read more

Calling BULLSHIT

Tom and Vivek jump on the pod for a special bonus episode to call BULLSHIT on VCs, CEOs, the “categorical shit,” and more. So strap yourselves in because the takes are HOT.

read more

Former Salesforce SVP of Marketing Strategy and Innovation Jon Suarez-Davis “JSD” Appointed Chief Commercial Officer at super{set}

The Move Accelerates the Rapidly Growing Startup Studio’s Mission to Lead the Next Generation of AI and Data-Driven Market Innovation and Success

read more

Why I'm Joining super{set} as Chief Commercial Officer

Announcing Jon Suarez-Davis (jsd) as super{set}’s Chief Commercial Officer: jsd tells us in his own words why he's joining super{set}

read more

When and Why to Bring on VCs

Tom and Vivek describe the lessons learned from fundraising at Rapt in 1999 - the height of the first internet bubble - through their experience at Krux - amid the most recent tech bubble. After sharing war stories, they describe how super{set} melds funding with hands-on entrepreneurship to set the soil conditions for long-term success.

read more

Startup Boards 101

Tom and Vivek have come full circle: in this episode they’re talking about closed session board meetings in The {Closed} Session. They discuss their experience in board meetings - even some tense ones - as serial founders and how they approach board meetings today as both co-founders and seed investors of the companies coming out of the super{set} startup studio.

read more

Q&A with Accel Founder Arthur Patterson

Arthur Patterson, founder of venture capital firm Accel, sits down for a fireside chat with super{set} founding partner Tom Chavez as part of our biweekly super{set} Community Call. Arthur and Tom cover venture investing, company-building, and even some personal stories from their history together.

read more

Arthur Patterson on Venture Investing

Arthur Patterson, the founder of venture capital firm Accel, sits down for a fireside chat with super{set} founding partner Tom Chavez as part of our biweekly super{set} Community Call.

read more

Four Tips for a Distributed Workforce

This month we pass the mic to Sagar Gaur, Software Engineer at super{set} MLOps company MarkovML, who shares with us his tips for working within a global startup with teams in San Francisco and Bengaluru, India

read more

Arthur Patterson on Company Building

Arthur Patterson, legendary VC and founder of Accel Partners, sits down with Tom Chavez to discuss insights into company building. Tom and Vivek review the tape on the latest episode of The {Closed} Session.

read more

7 Ways to Turn an Internship Into a Job at a Startup

Chris Fellowes, super{set} interned turned full time employee at super{set} portfolio company Kapstan, gives his 7 recommendations for how to turn an internship into a job at a startup.

read more

Frida Polli, CEO and co-founder of pymetrics

Kicking off the fourth season of the {Closed} Session podcast with a great topic and guest: Frida Polli, CEO and co-founder of pymetrics, which was recently acquired by Harver, joins us to talk about the critical role that technology and specifically AI and neuroscience can play in eliminating bias in hiring and beyond.

read more

Diamonds in the Rough

Obsessive intensity. Pack animal nature. Homegrown hero vibes. Unyielding grit. A chip on the shoulder. That's who we look for to join exceptional teams.

read more

The RevOps Bowtie Data Problem

Go-to-market has entered a new operating environment. Enter: RevOps. We dig into the next solution space for super{set}, analyzing the paradigm shift in GTM and the data challenges a new class of company must solve.

read more

Alysa Hutnik, Chief Privacy and Data Security Architect @ Ketch

We are delighted to share our new episode of the {Closed} Session podcast with guest Alyssa Hutnik. Alyssa looms large in the privacy world, and she’s been thinking deeply about the intersections of data, technology and the law for nearly two decades. She’s also the Chief Privacy and Data Security Architect at Ketch, a super{set} company, as well as a lawyer. Hope you enjoy the episode!

read more

The Information: "TikTok Is Not the Enemy"

Tom writes a nuanced take on the TikTok controversy and outlines ethical data principles that will restore people’s sense of trust and offer them true control over how and when they grant permission for use of their data.

read more

boombox.io Raises $7M to Build Out Creator Platform for Music Makers

super{set} startup studio portfolio company’s seed funding round was led by Forerunner Ventures with participation from Ulu Ventures Raise will enable boombox.io to accelerate product development on the way to becoming the winning creator platform for musicians globally

read more

Building the Creator Platform for Music Makers at Boombox.io

On the heels of boombox.io's $7M seed fundraise led by Forerunner, Tom Chavez and Vivek Vaidya sit down with boombox co-founders India Lossman and Max Mathieu for a special episode straight from super{summit} 2023 in New Orleans!

read more

From Chords, to Code, to Chords Again: The Story Behind Boombox.io

super{set} founding general partner Tom Chavez wasn’t always set on a life of engineering and entrepreneurship – music was his first love. For a time, he was determined to make a career out of it. With boombox.io, Tom has combined the best of both worlds into a product that inspires and delights both the engineer and the musician.

read more

Horizontal Scaling at super{summit}

Vivek gives us the rundown on what the hive is buzzing about after super{summit} 2023: how to 'horizontally scale' yourself.

read more

Generative AI + Creative Work with Big Technology's Alex Kantrowitz

Alex Kantrowitz, journalist and author of Big Technology, joins Tom and Vivek in the studio to discuss his road to journalism, ad tech, and the business and ethical considerations of generative AI.

read more

Jamming with Habu’s Matt Kilmartin on Partnership Strategy

Discover how Habu, a trailblazer in data clean room technology, utilizes strategic partnerships with giants like Microsoft Azure, Google Cloud, and AWS to expand its market reach and foster the potential of an emerging category. Learn from CEO Matt Kilmartin's insights on how collaboration is the secret sauce that brings innovation to life.

read more

MIT Professor Rama Ramakrishnan on How ChatGPT Works

MIT Professor Rama Ramakrishnan joins Vivek on the pod to delve into the evolution of Generative AI and ChatGPT, as well as his own journey as an entrepreneur turned business school professor.

read more

Pivots and Possibilities

Discover how lessons from law enforcement shape a thriving tech career. Ketch Sr. Business Development Representative Brenda Flores shares a bold career pivot in our latest "Pass the Mic" story.

read more

The Future of Work and Talent in Tech

Does it matter where you go to college? Should the SAT be abolished? Do you have to have a degree in computer science to work in tech? What are the differences between higher education in the US and in India? Why did Tom and Vivek ban Harvard and Stanford degrees from working at their first company?

read more

AI Alignment with Brian Christian of 'The Alignment Problem'

What does ‘AI alignment’ mean? Can philosophy help make AI less biased? How does reinforcement learning influence AI's unpredictability? How does AI's ‘frame problem’ affect its ability to understand objects? What role does human feedback play in machine learning and AI fine-tuning?

read more

Hold Fast: Game-Changing Wisdom from Seamus Blackley

Creator of the Xbox and serial entrepreneur Seamus Blackley joined Tom Chavez on stage at the 2023 super{summit} in New Orleans, Louisiana, for a free-ranging conversation covering the intersection of creativity and technology, bouncing back from setbacks to reach new heights, and a pragmatic reflection on the role of fear and regret in entrepreneurship.

read more

An Intro to Product-Led Growth from MarkovML

Want to grow your product organically? This blog post breaks down understanding costs, setting up starter plans, and pricing premium features using MarkovML as an example. Learn how to engage new users and encourage upgrades, enhancing user experience and fueling growth through actionable insights.

read more

Building Tech on a Moving Regulatory Target

In an interview with Ketch co-founder Max Anderson, the focus is on data privacy laws and AI's role. Anderson discusses the global privacy landscape, highlighting Ketch's approach to helping businesses navigate regulations. The conversation also emphasizes data dignity and Ketch's unique role in the AI revolution.

read more

AI Hot Takes: Deepfakes, The Big Stakes, and What to Make

Is AI our salvation or is it going to kill us all? Tom and Vivek roam widely on others’ takes about artificial intelligence, adding their insight and experience to the mix. Along the way they consider Descartes, Ray Kurzweil, Salt Bae, and Marc Andreessen, among others. If you are looking for a down-to-earth conversation that tempers the extremes at either end of the debate, this is the one you’ve been waiting for.

read more

Lessons from the Startup Circus

super{set} Technical Lead and resident front-end engineering expert Sagar Jhobalia recaps lessons from participating in multiple product and team build-outs in our startup studio. Based on a decade of experience, Sagar emphasizes the importance of assembling the right engineering team, setting expectations, and strategically planning MVPs for early wins in the fast-paced startup environment.

read more

Navigating the Startup Journey from Launch to Finish Line

Are you a launcher, or a finisher? The balance of conviction, a guiding vision, and the right team to execute it all make the difference between entrepreneurial success and failure. Tom Chavez delves into his journey as a first-time CEO and the invaluable guidance he received from a key mentor.

read more

Understanding The AI “Alignment Problem”

Vivek Vaidya recaps his conversation with AI researcher and author of "The Alignment Problem" Brian Christian at the 2023 super{summit}.

read more

High-Velocity Personal Growth

What's the price you put on personal growth? In his most recent note to founders, super{set} Founding General Partner Vivek Vaidya outlines 7 points of advice for startup interviews and negotiations. Vivek explains his compensation philosophy and the balance between cash and the investment in personal and career growth a startup can bring. Here’s the mindset you need to reach your zenith at a startup.

read more

Harvard Computer Scientist James Mickens on The Ethical Tech Project

Are we walking a tightrope with AI, jeopardizing humanity's ethical core? Is AI more than just algorithms, acting as a mirror to our moral values? And when machine learning grapples with ethical dilemmas, who ultimately bears the responsibility? Harvard's Gordon McKay Professor of Computer Science, James Mickens, joins Tom Chavez and Vivek Vaidya on "The {Closed} Session."

read more

How Boombox Nurtures Customer Collaboration for Success

In a conversation with boombox's co-founder India Lossman, the discussion pivots to the art of fostering customer collaboration in music creation. Lossman unveils how artist-driven feedback shapes boombox's innovative platform, with a glimpse into AI's empowering potential. Understand the synergy between technology and user insights as they redefine the independent music landscape.

read more

ActiveFence Acquires super{set} Company Spectrum Labs

ActiveFence, the leading technology solution for Trust and Safety intelligence, management and content moderation, today announced its successful acquisition of Spectrum Labs, a pioneer in text-based Contextual AI Content Moderation.

read more

How Engineers Should Talk to Customers with Empathy

Do you get an uneasy feeling anytime you get added to a customer call? Do you ever struggle to respond to a frustrated customer? Peter Wang, Product lead at Ketch, discusses how customer feedback can help drive product development, and how engineers can use customer insights to create better products. Learn best practices for collecting and interpreting customer feedback.

read more

Tech Crunch: Answering AI’s biggest questions requires an interdisciplinary approach

Tom Chavez, writing in TechCrunch, calls for new approaches to the problems of Ethical AI: "We have to build a more responsible future where companies are trusted stewards of people’s data and where AI-driven innovation is synonymous with good. In the past, legal teams carried the water on issues like privacy, but the brightest among them recognize they can’t solve problems of ethical data use in the age of AI by themselves."

read more

Spectrum Co-founders Launch Nurdle AI

Justin Davis and Josh Newman, Co-founders of Spectrum Labs (acquired) launch Nurdle to get AI into production faster, cheaper & easier.

read more

Spotlight Series: Gal Vered, Co-founder of Checksum.ai

The {Closed} Session Spotlight Series showcases a different co-founder from the super{set} portfolio every episode. Up first: Gal Vered is co-founder and Head of Product at Checksum (checksum.ai), end-to-end test automation leveraging AI to test every corner of your app.

read more

The Product Mindset for Engineers

Ever find yourself scratching your head about product management decisions? Join India Lossman, co-founder of boombox.io, as she unpacks the product mindset for engineers. Unravel the art of synergy between PMs and engineers and delve into strategies to enhance collaboration and craft products that users will adore.

read more

Why Headlamp Health is Bringing Precision to Mental Health

Co-founder of Headlamp Health, Andrew Marshak, describes the frustratingly ambiguous state of mental health diagnoses - and the path forward for making mental health a precision science.

read more

Marketing in the Age of AI with Rex Briggs

How is AI steering the future of marketing strategy? With the convergence of AI and marketing tactics, Rex Briggs paints a compelling picture of what's possible: AI agents that revolutionize user interactions, and generative techniques that craft persuasive content. Drawing from his deep expertise in marketing measurement, Rex joins Tom Chavez and Vivek Vaidya to explore the cutting-edge of AI-driven marketing strategies. Listen for insights on harnessing AI's potential in modern marketing.

read more

Tom Chavez in Huffpost Personal for Hispanic Heritage Month

Writing in the Huffington Post: "My Mom Sent Me And My 4 Siblings To Harvard. Here's The 1 Thing I Tell People About Success."

read more

Developer tools that are worth their while: KEDA and Boundary in action

Running cloud platforms efficiently while keeping them secure can be challenging. In this blog post, learn how two of super{set}’s portfolio companies, MarkovML and Kapstan, are leveraging tools like KEDA for event-driven scale and Boundary for access management to remove friction for developers. Get insights into real-world use cases about optimizing resource usage and security without compromising productivity.

read more

Watch: Sandeep Bhandari Fireside Chat

Sandeep Bhandari, Former Chief Strategy Officer and Chief Risk Officer at buy now, pay later (BNPL) company Affirm, joins Vivek Vaidya, Founding General Partner of super{set}, in conversation.

read more

Spotlight Series: Andrew Marshak, Co-founder of Headlamp Health

The {Closed} Session Spotlight Series showcases a different co-founder from the super{set} portfolio every episode. Up now: Andrew Marshak is Co-founder and Head of Product at Headlamp Health (Headlamp.com), a healthtech company bringing greater precision to mental health care.

read more

Philosophy, Data, and AI Ethics with NYT Best-selling Author + Data Scientist Seth Stephens-Davidowitz

From unpacking Google search patterns to understanding the philosophical underpinnings of big data, Seth Stephens-Davidowitz offers a unique lens. As the NYT Best-selling author of “Everybody Lies” and a renowned data scientist, he delves into the ways data mirrors societal nuances and the vast implications for tech and its intertwining with everyday life.

read more

Forbes: 5 Startup Studio Misconceptions

It's still early for the startup studio asset class - and we hear misconceptions about the studio model every day, ranging from the basic confusion of accelerators versus studios to downright incorrect assumptions on our deep commitment to the build-out of every company. Read Tom Chavez' latest in Forbes.

read more

Ringside Tales from Serial Startup Champion Omar Tawakol

Like Rocky Balboa and Apollo Creed, the fiercest competitors can sometimes become friends. Omar Tawakol is a prime example. As the founder and CEO of BlueKai, he went head-to-head with Tom, Vivek, and the 'Krux mafia' for dominance in the Data Management Platform arena. A serial entrepreneur with roots in New York and Egypt, Omar eventually steered BlueKai to a successful acquisition by Oracle before creating Voicea, which Cisco acquired. Today, he's pioneering a new venture called Rembrand (rembrand.com), which innovates in product placement through generative fusion AI.

read more

Spotlight Series: Lindsey Meyl, Co-founder of RevAmp

The {Closed} Session Spotlight Series showcases a different co-founder from the super{set} portfolio in every episode. Today's guest is Lindsey Meyl, Co-founder at RevAmp (rev-amp.ai), a "Datadog for RevOps" platform that offers observability across the revenue engine, monitoring performance, flagging when something is amiss, and determining the root cause of how to fix it.

read more

Why Proprietary Data Is the Linchpin of AI Disruption

Read Vivek Vaidya's latest in CDO Magazine and learn why in this new AI landscape, those who harness the potential of proprietary data and foster a culture of collaboration will lead the way—those who don't risk obsolescence.

read more

MedCity News: It’s Time for the Tech Revolution to Come to Mental Health Diagnoses

Headlamp Health co-founder Andrew Marshak writes in the MedCity News that "We need to take inspiration from the progress in oncology over the last few decades and challenge ourselves to adapt its successful playbook to mental illness. It’s time for precision psychiatry."

read more

What Consumers Think of AI and Their Privacy

Everyone’s talking about AI - so The Ethical Tech Project decided to listen. Joining forces with programmatic privacy and data+AI governance platform Ketch, The Ethical Tech Project surveyed a representative sample of 2,500 U.S. consumers and asked them about AI, the companies leveraging AI, and their sentiment and expectations around AI and privacy. On the latest episode of The {Closed} Session, get an inside look at the survey results in a deep-dive conversation with the team at The Ethical Tech Project.

read more

Why the AI Revolution Will Be Data-Centric

Pankaj Rajan, co-founder of MarkovML, joins super{set} Chief Commercial Officer Jon Suarez-Davis (jsd) to discuss the role of data in gaining a competitive advantage in the AI revolution. Learn the difference between optimizing models and optimizing data in machine learning applications, and why effective collaboration will make or break the next-gen AI applications being created in businesses.

read more

Tech Crunch: Boutique startup studio super{set} gets another $90 million to co-found data and AI companies

Startup studio super{set} has a fresh exit under its belt with the sale of marketing company Habu to LiveRamp for $200 million in January. Now, super{set} is adding another $90 million to its coffers as it doubles down on its strategy of building enterprise startups.

read more

Overheard @ super{summit}

Vivek Vaidya's takeaways from the inaugural super{summit}

read more

How To Avoid Observability MELTdown

o11y - What is it? Why is it important? What are the tools you need? More importantly - how can you adopt an observability mindset? Habu Software Architect Siddharth Sharma reports from his session at super{summit} 2022.

read more

Q&A with Accel Founder Arthur Patterson

Arthur Patterson, founder of venture capital firm Accel, sits down for a fireside chat with super{set} founding partner Tom Chavez as part of our biweekly super{set} Community Call. Arthur and Tom cover venture investing, company-building, and even some personal stories from their history together.

read more

How I Learned to Stop Optimizing and Love the Startup Ride

Reflections after a summer as an engineering intern at super{set}

read more

CalMatters: Why visa reforms benefit not just California’s tech sector but the economy overall

Vivek Vaidya writes that America needs more H-1B workers. Common sense reforms to the program will even the playing field for startups, not Big Tech, to bring innovative talent to America's shores.

read more

Silicon Valley’s Greatest Untapped Resource: Moms

This post was written by MarkovML Co-Founder, Lindsey Meyl, as part of our #PassTheMic series.

read more

The Era of Easy $ Is Over

The era of easy money - or at least, easy returns for VCs - is over. Tom Chavez is calling for VCs to show up in person at August board meetings, get off the sidelines, and start adding real value and hands-on support for founders.

read more

The Balancing Act For Women in Tech

This post was written by Ketch Sales Director, Sheridan Rice, as part of our #PassTheMic series.

read more

Not Just On Veterans Day

This post was written by Ketch Developer Advocate, Ryan Overton, as part of our #PassTheMic series.

read more

super{set}’s Spectrum Detoxifies The Online Space

We are living in a time of extraordinary concern about the negative consequences of online platforms and social media. We worry about the damage interactive technologies cause to society; about the impact to our mental health; and about the way that these platforms and their practices play to our most destructive impulses. Too often, the experiences we have online serve only to polarize, divide, and amplify the worst of human nature.

read more

When Inference Meets Engineering

Othmane Rifki, Principal Applied Scientist at super{set} company Spectrum Labs, reports from the session he led at super{summit} 2022: "When Inference Meets Engineering." Using super{set} companies as examples, Othmane reveals the 3 ways that data science can benefit from engineering workflows to deliver business value.

read more

Why I'm Co-founding @ super{set}

Pankaj Rajan, co-founder at MarkovML, describes his Big Tech and startup experience and his journey to starting a company at super{set}.

read more

Building Fast, Scaling Globally

Harshil Vyas joined the super{set} Hive (i.e., portfolio companies community) in March 2023 as Co-Founder of Kapstan and employee number one in India. We jumped on a Zoom recently to talk about accelerated timelines, globally distributed workforces, and what is unique about the super{set} model.

read more

Why I Left Google To Co-found with super{set}

Gal Vered of Checksum explains his rationale for leaving Google to co-found a super{set} company.

read more

Forbes: Why A Collaborative Approach Trumps "Lone Genius" In Company-Building

On the heels of super{set}'s first exit - the acquisition of data collaboration company Habu by LiveRamp for $200 million - Tom Chavez writes how the super{set} approach to collaboration in company building leads to successful outcomes.

read more

VCs Write Investment Memos, We Write Solution Memos

When a VC decides to invest in a company, they write up a document called the “Investment Memo” to convince their partners that the decision is sound. This document is a thorough analysis of the startup...

read more

Diamonds in the Rough

Obsessive intensity. Pack animal nature. Homegrown hero vibes. Unyielding grit. A chip on the shoulder. That's who we look for to join exceptional teams.

read more

We don’t critique, we found and build.

The super{set} studio model for early-stage venture. It is still early days for the startup studio model. We know this because at super{set} we still get questions from experienced operators and investors. One investor that we’ve known for years recently asked us: “you have a fund — aren’t you just a venture capital firm with a different label?”

read more

From Suitcases to Startups: Why Immigrants Innovate

How are immigrants like entrepreneurs? Peter Wang of Ketch arrived in the U.S. at age 7 with two suitcases and a box. Read his story in the latest "Pass The Mic."

read more

From Watsonville To The Moon

This post was written by Habu software engineer, Martín Vargas-Vega, as part of our new #PassTheMic series.

read more

Infrastructure Headaches - Where’s the Tylenol?

Head of Infrastructure at Ketch, and Kapstan Advisor, Anton Winter explains a few of the infrastructure and DevOps headaches he encounters every day.

read more

Detecting Software Bugs with AI

Gal Vered is co-founder and Head of Product at Checksum (checksum.ai), an innovative company that provides end-to-end test automation that leverages AI to test every corner of an app. He sat down with Jon Suarez-Davis (jsd) to discuss the exciting problem that Checksum is solving with AI and what Gal likes best about working in super{set}'s startup studio model.

read more

Founder and Father: A Balancing Act

Making It Work With Young Kids & Young Companies

read more

Data Eats the World

The wheel. Electricity. The automobile. These are technologies whose first practical use-case alone had a disproportionate impact; but beyond that, because they enabled so much subsequent innovation, economic historians call them “general-purpose technologies” or GPTs...

read more

super{set} Fund II: $90 million to intensify our serial focus on data+ai company building

Announcing super{set} Fund II

read more

Too Dumb to Quit

The decision to start a company – or to join an early stage one – is an act of the gut. On good days, I see it as a quasi-spiritual commitment. On bad days, I see it as sheer irrationality. Whichever it is, you’ll be happier if you acknowledge and calmly accept the lunacy of it all...

read more

Good Ideas, Good Luck

Coming up with new company ideas is easy: we take the day off, go to the park, and let the thoughts arrive like butterflies. Maybe we grab a coconut from that guy for a little buzz. While this describes a pleasant day in San Francisco, it couldn’t be further from the truth of what we do at super{set}. If only we could pull great ideas out of thin air. Unfortunately, it just doesn’t work that way.

read more

Jeremy Klein on Leading super{set}'s Data-Driven $90 Million Fund II

Jeremy Klein is a general partner at super{set}. Jeremy helped build super{set} from day one alongside Tom Chavez and Vivek Vaidya, designing super{set}’s structure, recruiting co-founders, and laying the plans for a scalable buildout; super{set} recently announced the closing of its $90 million Fund II. He sat down with Jon Suarez-Davis (jsd) to discuss the strategic timing and vision behind launching Fund II, his professional journey from legal expert to an integral part of super{set}’s fabric, and how his unique background and approach have been instrumental in building super{set} and recruiting top-tier co-founders.

read more

The Information: The People OpenAI Should Consider for Its New Board

Tom Chavez writes in The Information that "OpenAI’s board needs a data ethicist, a philosopher of mind, a neuroscientist, a computer scientist with interdisciplinary expertise and a political strategist."

read more

Forbes: Why The Biden-Xi Talks Should Put A Microscope On San Francisco

The prettifying and securing of downtown San Francisco, where super{set} is headquartered, should be the norm - not just for special state visits from the world's dictators. Here are 3 things the city of San Francisco should be doing all year round to make the city better to live, work, and invest in. Read Tom Chavez' latest in Forbes.

read more

7 Ways to Turn an Internship Into a Job at a Startup

Chris Fellowes, a super{set} intern turned full-time employee at super{set} portfolio company Kapstan, gives his 7 recommendations for how to turn an internship into a job at a startup.

read more

Redefining Customer Experience in Data-Driven Tech Startups

Ted Flanagan, Chief Customer Officer at super{set}-founded Habu, sat down with Jon Suarez-Davis (jsd) to provide insights into how Habu's strategies in customer experience set it apart in the data collaboration market. Learn how customer experience strategies helped Habu land a $200 million acquisition by LiveRamp in January 2024.

read more

Four Tips for a Distributed Workforce

This month we pass the mic to Sagar Gaur, Software Engineer at super{set} MLOps company MarkovML, who shares his tips for working within a global startup with teams in San Francisco and Bengaluru, India.

read more

Podcast: Tom Chavez on How AI Startups Can Show Us What’s Next in Marketing

Tom Chavez joins the "Decoding AI for Marketing" podcast published by MMA Global and hosted by well-respected international marketing & AI experts Greg Stuart (CEO, Author, Investor, Speaker) and Rex Briggs (Founder/CEO, Inventor, Author, Speaker).

read more

super{set} Celebrates First Exit: LiveRamp to Acquire Data Collaboration Software Startup Habu for $200M

LiveRamp Enters Into Definitive Agreement to Acquire Habu, Reinforcing super{set}'s Unique Company Building Model of Founding, Funding, and Scaling Data+AI Businesses

read more

The Four Types of Startup Opportunities

In our last post, we discussed how data is the new general-purpose technology, which is why at super{set} we form data-driven companies from scratch. But new technologies are a promise, not a sudden phase change.

read more

The Product Mindset for Engineers

Ever find yourself scratching your head about product management decisions? Join India Lossman, co-founder of boombox.io, as she unpacks the product mindset for engineers. Unravel the art of synergy between PMs and engineers and delve into strategies to enhance collaboration and craft products that users will adore.

read more

The Information: "TikTok Is Not the Enemy"

Tom writes a nuanced take on the TikTok controversy and outlines ethical data principles that will restore people’s sense of trust and offer them true control over how and when they grant permission for use of their data.

read more

The super{set} Entrepreneurial Guild

Has someone looking to make a key hire ever told you that they are after “coachability”? Take a look at the Google ngram for “coachability” — off like a rocket ship since the Dot Com bubble, and it’s not even a real word! Coaching is everywhere in Silicon Valley...

read more

Spectrum Co-founders Launch Nurdle AI

Justin Davis and Josh Newman, co-founders of Spectrum Labs (acquired), launch Nurdle to get AI into production faster, cheaper, and more easily.

read more

Why the AI Revolution Will Be Data-Centric

Pankaj Rajan, co-founder of MarkovML, joins super{set} Chief Commercial Officer Jon Suarez-Davis (jsd) to discuss the role of data in gaining a competitive advantage in the AI revolution. Learn the difference between optimizing models and optimizing data in machine learning applications, and why effective collaboration will make or break the next-gen AI applications being created in businesses.

read more

Former Salesforce SVP of Marketing Strategy and Innovation Jon Suarez-Davis “JSD” Appointed Chief Commercial Officer at super{set}

The Move Accelerates the Rapidly Growing Startup Studio’s Mission to Lead the Next Generation of AI and Data-Driven Market Innovation and Success

read more

Why I'm Joining super{set} as Chief Commercial Officer

Announcing Jon Suarez-Davis (jsd) as super{set}’s Chief Commercial Officer: jsd tells us in his own words why he's joining super{set}

read more

Why CTOs Should Care About Gross Margins, Cost-to-Serve, and Product Management

Why should a tech exec care about profit and loss? Aren’t our jobs to make the product great, and someone else can figure out how to make the numbers add up? That was my attitude for a long time until I finally appreciated the significance of gross margins for SaaS businesses during the early part of my tenure as the CTO of Krux.

read more

Pivots and Possibilities

Discover how lessons from law enforcement shape a thriving tech career. Ketch Sr. Business Development Representative Brenda Flores shares a bold career pivot in our latest "Pass the Mic" story.

read more

Developer tools that are worth their while: KEDA and Boundary in action

Running cloud platforms efficiently while keeping them secure can be challenging. In this blog post, learn how two of super{set}’s portfolio companies, MarkovML and Kapstan, are leveraging tools like KEDA for event-driven scale and Boundary for access management to remove friction for developers. Get insights into real-world use cases about optimizing resource usage and security without compromising productivity.

read more

Why Head of Product is Our First Co-Founder

At super{set}, we stand side-by-side and pick up the shovel with our co-founders. Our first outside co-founder at a super{set} company is usually a Head of Product. Let's unpack each portion of that title...

read more