This video features a conversation between Sam Awrabi and Roman Chernin, co-founder of Nebius, about the company's journey in building an AI-native GPU cloud. They discuss Nebius's significant partnership with Microsoft, their approach to product development, the evolution of AI infrastructure, talent acquisition, and the future of AI.
Roman, welcome to New York City.

Hi, Sam. Thank you for having me.

Let's dive right in. I think most people in the AI community know who you are at this point, but why don't you give a quick introduction to yourself and to Nebius.

Happy to. I'm Roman, and I was on the founding team of Nebius. Before Nebius I spent more than ten years at Yandex, where I was head of search and then head of maps and navigation. At Nebius most of my work has focused on go-to-market: I was responsible for kicking off sales and marketing when we started. Now we finally have a real CRO, someone who knows how to do it at scale, and I have shifted back a little toward product. I work with a lot of people across the company to make sure we develop our software stack beyond infrastructure as a service, and we will probably talk about why that is important for us. Talking about Nebius in general, a lot of people know we have our heritage in Yandex. The team that built it worked together for a long time at Yandex, and we are led by the founder and CEO of Yandex, Arkady Volozh. We have a group of very strong engineers across different parts of the stack, which I think is real luck for this market. We have the people who built and operated all of Yandex's infrastructure at scale: they built the data centers and built the hardware. We have the team that worked on the cloud software platform, and we also have a lot of expertise in AI itself, because Yandex was an AI company before anything was called an AI company. What we are doing now is building an AI cloud. We build the platform and the infrastructure to enable everyone building AI-centric workloads and applications, all the way from small startups to large enterprises, labs, and now hyperscalers.

What's interesting about Nebius, having read and followed the journey, is that you work with research labs, startups, institutions, and now see a lot more enterprise adoption; I've seen customers like Cloudflare and others. And on that note, the Microsoft partnership. I believe it was the largest vendor deal ever in Microsoft's history; I think the contract value is $17.4 billion to start and can go up to $19.4 billion based on capacity. Walk us through it: how do you land such a massive deal, the largest in Microsoft's history?

Do you know for sure it was the largest vendor deal? I never checked. It was obviously the largest for us, but I'm not sure it was for Microsoft as well.

That's what the research said, and I believe so.

No, it's cool. Funny enough, the day after we signed this deal we felt like the kings of the hill, and then the next day OpenAI signed with Oracle for $300 billion, and we felt like Oracle came in and said, hold my beer, guys. But it was quite a milestone for us, obviously, and I think it matters as validation of our maturity. You can imagine that customers like Microsoft, when they sign a deal, go all the way through the details and the enterprise readiness of their provider. It is also obviously very important from a business perspective: it helped us scale revenue, and it helped us raise additional capital.
And what I think is undervalued in how the community sees this deal: internally we think about it more as fuel to feed our core business, and our core business, as we call it, is building an AI cloud, a multi-tenant cloud with all the software services on top to serve a diverse set of customers. You can imagine that customers like Microsoft mostly need raw infrastructure, raw compute. But our real motivation and vision is to build much more than large bare-metal clusters, and deals like the Microsoft one let us reinvest a lot in developing the product and scaling capacity for the rest of the market. So it is really difficult to overstate, for a company like ours, how important it is to scale fast: our competition is with the companies with the largest balance sheets in the world. We simply need to build a lot, and the Microsoft deal put us on the next level of what we can afford to build for the rest of our customers, and that is the most important thing for us. We even told the team internally: you can think of it as a slightly unusual way to fundraise. It gives you the capital, and that is the most important thing, the capital to build.

I want to dig into some of this, and I agree with a lot of what you're saying. The GPU cloud space, to a large extent, is a financial engineering game: whoever can take in the most financing wins. In the early days it's equity and debt; as you go public it becomes a lot of public-equity fundraising and still private-market debt. But to your point, and I think this is what makes Nebius really special, it's the software stack: Kubernetes, multi-tenancy, latency, security, getting all of that right so your end user doesn't have to spend a lot of time making the GPUs work for their use case. So with the Microsoft deal, what are the biggest internal challenges to delivering it? And then that can be a nice segue into what you're building that's genuinely unique in this fiercely competitive software layer on top of the hardware.

With the Microsoft deal, the main characteristic of the partnership is scale. I think it will be one of the largest single-site deployments of...

In Virginia, right?

In New Jersey.

New Jersey, that's right.

For the delivery schedule ahead of us, the next nine or ten months, I think it will be one of the largest single-site deployments of GB300s, the newest chips going in there. So it's a lot of engineering work at scale, and you can imagine they expect a very high SLA and operational efficiency. I think we're quite confident we can deliver on that, simply because we saw scale like that back in our Yandex days, and for the team building it, this is not something they don't know how to deal with. It's execution, first of all. Then, what is interesting about what we call the core business, the software-driven business, is that it is also about scaling fast, deploying fast, and growing, but it is also about the product. I think the industry in general, and we as part of it, has more or less figured out what infrastructure as a service looks like for the AI world.
But it is still to be defined what the PaaS layer is. If you think about the classical cloud layers, you have IaaS and you have PaaS. So what is the PaaS layer for AI-specialized development? What is the developer platform, and what should an efficient developer platform look like? How should it change together with the change in software development we are observing now with agentic development? We have all of these things landing at once that we need to figure out: we have never seen such scale; the type of applications is changing, because these AI-centric, agentic, vertical solutions are different from what we saw before; and third, the developer is changing. We can probably assume that two years from now, if not sooner, a lot of interactions with the infrastructure won't come from humans but from agents. So what is the right developer platform for deploying and changing applications when the consumer of the platform itself is changing? This is something really fascinating, something that is still not defined or figured out, and something the classical hyperscalers are not doing great at yet either. And this is where we want to be. We want to grow with our customers, with the AI-native developers, with software vendors like Shopify and Cloudflare. We want to go deeper into enterprises and see how we can help them build these AI-native developer platforms and the PaaS layer of the cloud.

Yeah, I agree. My core thesis at Banyan is that AI-native technology will be the dominant force of our lifetime, and 50% of my focus is on AI-native infrastructure. So diving into specifics: at the scale you're at now, it's check, you can handle scale; check, you can handle all the enterprise security requirements; you're deploying one of the largest GPU clusters ever. I know you're also the most premium partner with Nvidia, you've been awarded that, and we can talk about it later. But what are the things your team is ideating on from a product-innovation point of view that would deliver a 5x or 10x increase in value in the software layer on top of the GPUs, compared to the hyperscalers or CoreWeave or Lambda Labs? What are the core things your team wants to ship, maybe with Microsoft or longer term? I'd love to learn the specifics.

It's a great question, and a lot of the answers are still to be defined.

You have a fun job.

It's a journey. I think we are focusing on a few aspects. First of all, this AI infrastructure is extremely expensive, and the first thing you always need to think about is how you extract every percent of efficiency from your customers' investment. If someone secures a cluster and pays tens of millions of dollars, or hundreds of millions, or even a hundred thousand dollars as a really small company, that is a significant investment. In any infrastructure, efficiency is important, but here it is simply a question of survival (a rough back-of-the-envelope illustration follows below).
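To make the efficiency point concrete, here is a minimal sketch of how utilization drives the effective cost of every useful GPU-hour on a reserved cluster. The hourly rate, cluster size, and utilization figures are made-up illustrative numbers, not Nebius pricing.

```python
# Back-of-the-envelope illustration only; the prices and utilization figures
# below are invented for the example, not Nebius's actual numbers.

HOURLY_RATE = 3.00        # assumed all-in cost per GPU-hour, in dollars
CLUSTER_GPUS = 1024       # assumed reserved cluster size
HOURS_PER_MONTH = 730

def effective_cost_per_useful_gpu_hour(utilization: float) -> float:
    """Cost of one GPU-hour that actually does useful work,
    given the share of reserved hours that are not idle."""
    return HOURLY_RATE / utilization

# The bill is paid whether or not the GPUs are busy.
monthly_bill = HOURLY_RATE * CLUSTER_GPUS * HOURS_PER_MONTH

for utilization in (0.5, 0.7, 0.9):
    print(
        f"utilization {utilization:.0%}: "
        f"${effective_cost_per_useful_gpu_hour(utilization):.2f} per useful GPU-hour, "
        f"~${monthly_bill * (1 - utilization) / 1e6:.2f}M of the "
        f"${monthly_bill / 1e6:.2f}M monthly bill spent on idle GPUs"
    )
```

Even under these toy assumptions, moving from 50% to 90% utilization nearly halves the effective cost of useful work, which is the sense in which efficiency becomes a question of survival rather than optimization.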
So the first thing we always focus on is whether we provide the best performance, the best return on investment, the best TCO for the customers who invest in this infrastructure, and there is a lot of work there at the infrastructure layer, from how you construct the data centers all the way up to the software. The second thing is developer experience, which I think is super important. We already saw this in the AI-native world, where a lot of new startups and companies are started by researchers. First, these are small teams. Second, they are quite expensive people: you start the new company with top-notch people hired from, I don't know, DeepMind, and Mark Zuckerberg made them a lot more expensive with those $100 million offers. And a lot of them came from large ecosystems where they were surrounded by people whose whole job was to make sure they had all the infrastructure and tools in place, and then they land in the wild west: okay, here is your cluster, run your job. What? I've never done that. So developer experience in this sense is really important: how much time they need to convert their ideas and experiments into production, and how many people they need to hire to deal with infrastructure. Then consider that we are in a world of supply shortage, so a lot of people are multi-clouding because of it, and every cloud requires different tweaks, so it becomes a real challenge to support all that infrastructure, again with a small team of core members who are probably not used to dealing with raw infrastructure. And third, their time is expensive, not only because they are paid a lot but because the competition is harsh: every month, every week we see the race, and if you lose time and don't deliver the new version of your product or your model, you suffer. So these two parameters are really important: efficiency, making sure every GPU is utilized and the whole system works perfectly, and developer experience, time to value, TCO.

Okay. So is your team doing a lot of forward-deployed engineering to provide DevOps support so the researchers can focus on research? Not only getting the networking and the GPU configuration right, everything they need, but also abstracting away the team-specific DevOps work?

I think players like us are also evolving. The first thing we do is make sure they get the best support: we sit in Slack channels with all of them, we have dedicated solution architects and follow-the-sun coverage, so if they work at night, we follow them at night. That is the service side. But what we really want to build is the platform that supports them, because people don't like talking to people.
They want things to just work, and a lot of the questions we try to answer are about how the software platform should work so that people don't need to ask us anything: how the API should be designed to help them integrate properly, what the user experience should look like. And, importantly, you have to build this in by design. A lot of players in this new cloud world came in with a "we deploy clusters" mentality. Deploying clusters may work for the large customers that just need raw compute, but it will not work for a developer-first approach; the timing, the configuration, everything requires a different approach. We built it from the start as a hyperscaler-style cloud: a fully virtualized system, all resources provisioned automatically, API-first development, so everything we have in the platform is available to developers through an API (a rough sketch of what that looks like in practice follows below); they don't want to deal with a user interface and so on. And it is paying back now. We see that our time to deliver value is shorter than many of our competitors'. We are much better built for partners, value-added software developers, to integrate with us; we have a lot of integrations and partnerships with other software vendors, because again they need the APIs, they don't want to deal with raw infrastructure. And we see that customer satisfaction is quite high: once customers start working with us, they prefer to stay and build more and more.
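Since the conversation stays high-level here, the following is a purely hypothetical sketch of what an API-first, fully automated provisioning flow can look like. The endpoint, request fields, and token handling are placeholders invented for this example, not Nebius's actual API; the point is only that compute can be requested by a script, a CI job, or an agent with a single call, with no console or ticket involved.

```python
# Hypothetical illustration of an API-first provisioning call.
# The base URL, fields, and response shape are placeholders, not a real API.
import os
import requests

API_BASE = "https://api.cloud.example.com/v1"               # placeholder endpoint
TOKEN = os.environ.get("CLOUD_API_TOKEN", "dummy-token")     # placeholder auth


def create_gpu_cluster(name: str, accelerator: str, node_count: int) -> dict:
    """Request a GPU cluster entirely through the API: the same call a CI job,
    an internal tool, or an agent could make without human involvement."""
    response = requests.post(
        f"{API_BASE}/clusters",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "accelerator": accelerator, "node_count": node_count},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. cluster id, endpoints, state


if __name__ == "__main__":
    cluster = create_gpu_cluster("nightly-experiments", "H100", 4)
    print("provisioned:", cluster)
```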
And then the other thing we should talk about, and I think maybe the most important, is how the use cases are evolving. This market started with developing the foundational technologies, models in particular. We started with model builders: mostly training jobs, mostly large distributed training jobs. Then, from maybe late 2024, maybe a little earlier, we started to see a lot of demand coming from companies building vertical AI. A while ago they were called "GPT wrappers," with a little bit of disrespect, but in reality those GPT wrappers are becoming the new champions of this market. Think about Cursor, and about others who don't start from foundational model development: they start from the product, they find a use case that works, and that happens because the foundational technology is now good enough to unlock so much value in those scenarios. And then they go down the stack. They may start with closed-source models like Claude or ChatGPT, and then at some stage of their growth, not at the early stage when they just need to find product-market fit and prove the use case works, but when they start scaling, they have a very strong reason to go and see what they can build apart from those closed-source models, because they need to meet their unit economics at the growth stage and they need to create some technological moat around their product. That is the stage when they mostly move to open-source models (we have very strong capabilities around open-weight models), and sometimes they go even further down the stack and start training specialized models. At that stage they can already see the use case, they have their customers' data, they can observe what is needed, and, most importantly, they have a use case they need to build reasonable economics around. This is the second wave of customers we now see, and their requirements are different from the foundational model builders'. They come to us for the inference platform first of all, because they are not that savvy in training; they don't run huge training jobs, but they have comprehensive inference scenarios with high reliability requirements, because there are real customers behind them, high efficiency requirements, because there are economics behind them, and high requirements on the robustness of the developer platform, because they iterate, they move fast, they need to test all the models, fine-tune them, apply their data, tweak things. It is normal product development: how fast you can iterate defines your success. So this is the second wave of consumption, and for them we mostly focus on building an inference platform, a fine-tuning platform, and a data platform to deal with what they have, a flywheel of platform improvements that helps them build their own flywheel of product improvements. It is not really infrastructure at all anymore. And then the third category of customers we are starting to see are enterprises. They are not AI-native, and they are now starting to see that some of the use cases work, thanks to the previous category, the AI-native vertical AI companies that simply showed what works. The first among them are probably ISVs, software vendors, more technically savvy, who start applying it to their use cases in the enterprise: you want to build better search for your customers, you want to build sales and support, voice engines, coding assistants. And then each of them also has core use cases for their core business, whether that is the seller experience for an e-commerce company or drug discovery for a health-tech company.
These are some of their unique core use cases where, again, they can leverage their data and their understanding of the use case better than anyone else on the market, and in a lot of cases they are not limited to closed systems, to just using an off-the-shelf model; they want to build. So this is the third wave of customers, and again, you can imagine they don't need just infrastructure; they need something else. Of course, they come with all the enterprise requirements: security, compliance, access controls, how you integrate with their data, how you make sure the data is not exposed, and so on. But their developer experience also goes well beyond just using infrastructure. Yes, they can have training scenarios too, where they go down to Slurm or Kubernetes clusters, but they are probably more similar to the product companies, with some requirements of their own. They may have hybrid infrastructure settings, where they want to offload something to the cloud but still utilize what they have on-prem. They may have different team-collaboration needs, and they need a platform that helps many teams collaborate around the data and across different scenarios, all the normal software-development requirements that come with large teams. So for us, when we think about the product roadmap we need to execute, we need to follow these waves of consumption. And we believe the ultimate goal is to be an AI-native cloud, AI-native in our DNA, built from the ground up on an understanding of each layer of this AI infrastructure, and serving those enterprises at the end of the day, because we are big believers that, at the end of the day, a lot of the consumption will come from there. We are still in such early days: we are seeing the first percent of companies apply AI at scale to the first percent of their use cases, so it will grow tens or hundreds of times. And the winners of AI-native product development will also become enterprises themselves, because their customers will be the enterprises.

So many great points; I really appreciate you explaining it at that depth. And I also agree with your last point, that the AI-native scale-ups of today are going to be the public companies. The top seven companies in the S&P 500 will be displaced, or will look different, I would say, in the next three years; OpenAI might kick one of the top seven out, for example. So what I'm hearing you say, and I know you came at this from a lot of different angles, is that you're looking at the actual workflow of the user, whether it's a research lab becoming a scale-up, or a more traditional software company that has watched these successful AI-native companies, is now applying AI to its own business, and suddenly has collaboration needs and hybrid-cloud requirements. Or you're looking at the AI-native developer companies, whose needs are mostly: we need APIs for everything, we want to use these twelve services, we don't want to hack away at it.
We just want our abstraction layers out of the way, to get to market as fast as we can and keep iterating. So what I'm hearing you say about Nebius is that you look at this as a product and a design experience, try to abstract away any complexity and layer that into the product, along with excellent support, and serve these different categories of customers.

Yeah, I think you nailed it; you wrapped it up very accurately. There are a lot of possible scenarios for our future, and nobody knows what will happen, but there is no future in which we haven't scaled enough and aren't successful; in the infrastructure business, it's all about scale. But I think our real success will come if we unlock this developer-experience thing, if people prefer to build with us versus any other cloud on the market.

Even if they can get GPUs from somewhere else for cheaper, they'd still rather pay more to use Nebius because the developer experience is so much better, and you're equipped with the tools, as you said, from data engineering down to the training suite, your cloud environment, your collaboration.

Absolutely. And it's a big game, and not all the answers are clear yet. We know some of the pieces we need to implement. We see there is a lot of value, again, in going down the stack and integrating across all the layers. If you think about inference at scale, for example, in the long term you can win a lot when you build the physical infrastructure, the clusters and the hardware, with scaled inference in mind.

What do you do differently there? Is it more memory that you plan for?

You can actually run different stages of inference on different chips, for example; that is one thing. Another is that you can gain a lot of efficiency from distributed caching and how you handle it, which goes far beyond just extracting more tokens from a single GPU. And you can do a lot by mixing different types of workloads on the same capacity. I always give this example: imagine you have a product today with very spiky demand. You have a lot of customers and you need a lot of GPUs for a few hours a day, and for the rest of the day you have no customers. In the current market you literally cannot serve that: either you reserve all the compute for the spike and then sit idle, so your economics don't work, or you cannot autoscale when the customers arrive and you lose them. It is simply an unsolvable problem in a world of dedicated clusters. So I think that, long term, we will see different types of workloads mixed together: to serve the spike you can borrow some compute from a lower-priority training job, for example, and then give it back to training afterwards. Or we will see real-time inference living together with batch data-processing workloads that need high throughput but can tolerate higher latency. And to make that efficient at the platform level, you cannot do it naively by renting infrastructure somewhere else and building an inference stack on top. You need to go down the stack to the orchestration layer and make sure the whole stack can support it (a minimal sketch of the idea follows below).
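As a toy illustration of the workload-mixing idea described above, here is a minimal priority-based capacity pool, assuming a single shared cluster where a preemptible training job soaks up whatever the latency-sensitive inference fleet is not using. It is a sketch of the scheduling principle only, not Nebius's orchestration layer; the numbers and names are invented.

```python
# Toy illustration of mixing workloads on shared capacity; not a real
# orchestration layer, just the preemption idea in miniature.
from dataclasses import dataclass

TOTAL_GPUS = 64  # assumed size of the shared cluster

@dataclass
class Pool:
    inference_gpus: int = 0            # high priority, sized to live demand
    training_gpus: int = TOTAL_GPUS    # low priority, preemptible filler

    def handle_inference_demand(self, gpus_needed: int) -> None:
        """Grow or shrink the inference share; training absorbs the difference."""
        gpus_needed = min(gpus_needed, TOTAL_GPUS)
        delta = gpus_needed - self.inference_gpus
        if delta > 0:
            # Spike: preempt low-priority training capacity to serve customers.
            self.training_gpus -= delta
        else:
            # Spike over: return the borrowed GPUs to the training job.
            self.training_gpus += -delta
        self.inference_gpus = gpus_needed
        print(f"inference={self.inference_gpus:>2} GPUs, "
              f"training={self.training_gpus:>2} GPUs (idle=0)")

pool = Pool()
for hour, demand in [(9, 8), (12, 48), (13, 56), (15, 16), (22, 2)]:
    print(f"hour {hour:>2}: ", end="")
    pool.handle_inference_demand(demand)
```

In this sketch the cluster is never idle: the training job simply runs slower during the spike and reclaims the GPUs afterwards, which is the economic case for co-scheduling spiky inference with lower-priority batch work instead of reserving a dedicated cluster for each.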
So, moving on to your team. It probably seems obvious now, given all the success, but at one point some people wrote you off as a company. There's this article that recently came out in Forbes. It says something like: a year ago Nebius was seen as a leftover from the Russian tech spin-off of Yandex, but since then it has staged one of the most impressive comebacks in the sector. And the article's title asks about your stock price, whether it could go into the 400 range; it's posed as a question. But what I want to ask you, Roman, is to tell us about some of the insights from that journey, some of the low moments, the moments that could inspire other founders. I don't want all the great things, because we see those now. How did you do it? I can't imagine that journey, from building a really successful company at Yandex, through all the political turmoil, to building a brand-new company, and now you're crushing it. Tell us about some of the things people maybe don't know about your journey and your story with your team.

I would say two components were crucial. First, it's the founder, Arkady Volozh, who built Yandex and is now the main visionary and the main engine, a never-enough person pushing the whole thing.

A never-enough person, yeah. The day after you signed the Microsoft deal, he calls and asks, okay, what's next?

And also very resilient. You can imagine that this whole transition of the company was also a very personal story for him, because Yandex was his baby. You spend 25 years building something incredible, and then overnight you understand that something has changed. So I think Arkady's persona is something that is difficult to overvalue in our journey. And second, I think it is luck, and partly also vision, that this company is built around a group of very strong engineers, and not only the number of engineers but the fact that they combine experience across the full stack. When we started, there were not many companies out there, if any, with experience building data centers from a greenfield, building hardware, building software, and having AI expertise at the scale Yandex had. Behind Google, Yandex was second in scale for search, right? We were the largest search engine in a fairly small market, and I'm not sure we were larger than Bing in scale, but comparable.

The point is that only a few teams in the world had ever operated at that scale.

That's true. Yandex in general was just large infrastructure at large scale, and Yandex was built with this hyperscaler mentality: we need to understand every piece of the stack to do this right, to meet our mission. We didn't have the answers yet; we had to find the answers. So I think this group of engineers, who saw that scale, who have expertise at all levels, and who don't have this fear of figuring out the hard things, is the second secret sauce. And people were quite surprised.
This is part of the history too: this neo-cloud market started to grow exponentially when the GPT moment happened.

That's right.

And we saw it, we understood it, we wanted to be there, but we couldn't, because our corporate transition hadn't finished. We sat for almost a year and a half observing the market, saying, let us do it, let us do it, but we just could not, because we had not finalized the separation from the Russian assets, and we needed access to capital to actually start growing. But all that time we invested in the product. We believed we would finalize this pre-history of the company, get out, and go to market, and we invested heavily in product and tech during that period. So when we came to the market, a lot of people were surprised that we arrived with a product; nobody had heard the name.

I know, I was surprised. You pretty much came out of nowhere.

And we were in Europe, and then we came to the US, because we understood that most of the market is here in the US.

And then you go public, and then it's like, just kidding, now we have a $17 billion deal with Microsoft, and I can't imagine what's coming.

But I think not many people understood that we did not start from scratch. We started from scratch from an awareness perspective, but we did not start from scratch with the technology and the team.

Or the culture, like you said, that spirit of overcoming challenges and not outsourcing the hardest problems.

Yes, and that helped a lot. I cannot say we didn't have bad days, when we wondered whether we could deal with it all, but the platform and the engineering efficiency were there, and I think that is the second secret ingredient.

Is it fair to ask, or to say, I don't know which it is: if those challenges with Yandex had never happened, would Nebius never have been born? If the war had never happened, all these political issues, would you even be here?

I think it's a fair question. Nobody knows, right? We don't know how history would have gone. Back in Yandex times, Arkady and the team always had the ambition to build something beyond the core market we operated in. We understood we had the technical capabilities to be competitive on the global landscape. But I think you're also right that, you know, when you're that successful, why not keep doing what you were doing? And Yandex went far beyond just the search and advertising business: we were the largest ride-hailing business in the country, we launched self-driving, we launched e-commerce, and so on. And in the bottom of my heart I feel that if it had not happened, we could still be very successful, in a very limited geography.

This journey seems a lot more fun to me. Being at the front of the AI innovation space and powering the builders. And what I've already learned from this discussion is how obsessive you are as a company with the product experience and the user experience.
It seems like you're not trying to go in and find all the answers; it's more, let me listen and abstract away what you need from us as a product partner, and we'll build that for you, rather than trying to be overly visionary. You're really focused on the core requirements people are facing in their unique roles.

I think it's just that the market is so fast and yet not defined in its final shape. And there are different approaches: you can be very opinionated and build the future you believe in. What we try to do is build the foundations we believe in: we build infrastructure robust enough to address different types of scenarios. A lot of people asked why we would focus on software multi-tenancy if the market is driven by the largest customers, who don't actually need it or can live without it. And now, with enterprise adoption, we see that it was the right call. So we make the bets at the level of the general approach, but the specific shape of the product will be defined together with the customers.

Yeah, I love that. So I want to talk about this area of frenemies. I'll give some examples and bring it back to Nebius. We have OpenAI now working with Shopify, rolling Shopify natively into OpenAI, so when you're searching you can buy products on Shopify natively from OpenAI. And OpenAI, all in the last week of news, I think, has a hundred-million-dollar-plus partnership with Databricks. So, to your point about software vendors rolling out AI and figuring out what works, now they're going in this direction. Those are recent examples. But with Nebius, you're partnering with Microsoft, and yet Microsoft is also a cloud provider. You could be viewed as competitors or partners; obviously here you're partners. How does that work as a founder in this space, guiding this incredible journey? How do you deal with that frenemies dynamic? Because I think people like me are always wondering about it.

It's a great question. I think the examples you gave are, in a way, easier, because the first two are pretty clearly mutually beneficial: at the end of the day there is no doubt that OpenAI will be one of the dominant players, and also no doubt that the range of use cases is so diverse that pragmatic companies like Shopify will build something on OpenAI, build something in-house, and build something on open-source models. And our niche, with respect to Shopify as a customer, is obviously to help them achieve their mission, building what they need and buying the way they want. What is more interesting is, for example, the Microsoft relationship. The way we deal with it, as I said, is that we think this market is a market of opportunities. You need to take the opportunities that help you achieve your goals, but if you want to build something long-term, you still need to remember what your core is. And with respect to the Microsoft deal, we are happy to work with them.
We are really enjoying the journey and the ability to serve such a great customer, but we remember that our end game is not to be only a Microsoft partner. We want to build our product. We think the AI-native development market is such a huge opportunity in front of us that there is enough space for everyone. It's obvious that Microsoft will keep and grow its presence there, but there is also enough space for us, and for companies like us, to enter this market and deliver value that Microsoft may not be focused on, or that in some things we may do better. And I'm not comparing us with Microsoft specifically; I'm just saying that, in general, the market of cloud providers that was considered a done deal, not even a red ocean, just a finished market, is now open again, simply because of this extraordinary growth of new use cases that need new solutions, new developer platforms, and new infrastructure. So, going back to principles: first, be opportunistic and see where the fuel for your growth is; and second, remember what you want to build long term, and don't forget to prioritize it above the opportunities that lie in front of you.

Yeah. It seems like what I'm hearing you say is: partner where it makes sense to grow your business and grow their business, it's mutually beneficial, but at the same time stay laser-focused on your greater mission of building the best product experience.

And we don't know where we will end up, you know; even the final business model can change, because at the end of that...