CreatorML (YC W23) helps the world's biggest YouTubers predict which topics, titles, and thumbnails will maximize views and ROI using machine learning. Founder Charles Weill migrated from ECS Fargate to Porter on AWS to streamline their infrastructure, at a total cost no higher than what they were paying on Fargate. Hosting on Fargate and looking for more performant infra? Redeem the Porter startup deal here: https://lnkd.in/eJahycr5
Porter’s Post
OpenAI has just killed a zillion startups 😍🤬 No way, no one wants vendor lock-in, the future is self-hosted, open-source everything (deployed on AWS/GCP/Azure, but let's omit that) 🤬😍

As usual, the truth probably lies in the middle. For sure, building a thin wrapper around OpenAI APIs will not give any long-term advantage, and the next release might kill you. But specific problems require specific solutions, and OpenAI won't solve all the world's problems alone, any more than any other software vendor. So there is plenty of space for other players.

But the true winners, imo, won't be only OpenAI or some of their competitors. True winners will be the companies that succeed in changing the way we live our daily lives, like FAANG did at the beginning of this century. And odds are they will LEVERAGE the AI platforms rather than building their own. No one knows what that will be. Maybe someone is already building it. The future will speak.

I'll elaborate on my thoughts in the coming days, as they are still quite blurred. #openai #devday #businessmodels
GENAI FIELD SOLUTION ARCHITECT @ Google | Startup Ecosystem & VC Expert with Digital Natives Insight
It's a wrap! I had an incredible time at TechCrunch #disrupt2023! It was a pleasure to hear about the innovative ideas startups are working on and to meet many amazing people, including Thomas Kurian (TK). It is inspiring to see so many startups asking in-depth technical questions about Google Cloud. TK made a great point in his talk: we are entering a phase where the products we use are no longer mere replacements for their mechanical counterparts (typewriter -> Documents). With Duet AI, we are now in a phase where AI is an author that helps us write, rather than just automating our keystrokes. With that in mind, let's get back to learning! Thanks to the Google for Startups team, especially Ryan Kiskis, Ishita Matharu, Brenda Wood, Madison Jenkins & Blake McCammon for having me!
Getting to play with bleeding-edge tech 💻 is honestly the coolest part of being a technical founder. The pace of AI innovation? It's wild, but I wouldn't trade the constant learning for anything. Google's LLM day in Seattle was a blast. We got hands-on with new AI architectures, learned how surprisingly smooth it is to productionize AI with Google Cloud (a huge deal for smaller teams like ours), and, wait for it... 'jailbreaking' LLMs 😲 (shout-out to Vinesh Prasanna Manoharan for that awesome workshop!). Turns out, with the right approach, there's way more flexibility than I realized.

Biggest product takeaway => It's easy to get caught up in the sheer awesomeness of AI, but the real magic is finding the pinpointed use cases where it will meaningfully transform your product.

Biggest technical takeaway => Integration with retrieval services: Google Cloud offers built-in integration with retrieval solutions (e.g., document stores, vector search) that are essential for retrieval-augmented generation (RAG) performance and development speed. This simplifies the pipeline from data source to model, meaning faster speed-to-market for AI solutions from startups like mine. That's the kind of transformative insight we're constantly after at my startup, Adauris. #googleai #LLM #founderlife #startups #audio #googlecloud #adauris
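To make the RAG takeaway concrete: the pipeline is "retrieve the most relevant documents, then hand them to the model as context". A minimal sketch of that idea, using a toy bag-of-words retriever rather than Google Cloud's actual vector search API; the documents and function names here are purely illustrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG stacks use a vector model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Porter deploys containers to your own AWS account.",
    "RAG grounds an LLM's answer in retrieved documents.",
]
print(build_prompt("What does RAG do?", docs))
```

In a managed setup the `embed`/`retrieve` steps are replaced by the cloud's embedding model and vector store, which is exactly the plumbing the post says Google Cloud provides out of the box.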
💭 Let us tell you a familiar story, one we have often heard before: a story of unstoppable growth, of success that seems to blossom overnight. But if we dig a little deeper, we discover that behind that sudden triumph there is often a long path of tireless work, sometimes spanning decades. Our story is following the same pattern. Ever since we announced our commitment to creating a solution for GPT Apps, requests have been pouring in, and this is only the beginning. It's clear that Nuvolaris is the ideal choice for several reasons:
• GPT Apps are based on REST APIs, and Nuvolaris was designed specifically for this type of use case.
• It makes developing REST APIs easy, much simpler than building a custom server.
• When the number of REST APIs to be developed is large, the practicality of Nuvolaris becomes even more apparent.
• Its structure is inherently SCALABLE, guaranteeing automatic, low-friction deployment.
• If you don't embrace scalability from the start, the solution becomes a little more complicated, but it's clearly a standard worth adopting!
Imagine launching a successful GPT App with a hundred thousand visitors using it. Now imagine being left holding the bag because behind it sits a micro-server on Amazon that can't support more than three concurrent users, an application constrained by a single database and impossible to scale. The applications we're talking about are on another level: they are highly scalable and, in this context, that is simply necessary and unavoidable. We are not talking about projects that "maybe one day" will end up on Kubernetes. We are talking about something you have to run on Kubernetes TODAY, and that you have to know how to handle perfectly! That's why we wish everyone luck taming this monster! ☁️ Or, alternatively, you can opt for Nuvolaris, a simple and intuitive solution.
It's the fruit of at least six years of work, of a start-up built around this idea. It did not arrive 'just like that': it is an answer to a real problem that, when it became urgent, we were ready to solve! #nuvolaris #GPT
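To make the "REST API without a custom server" point concrete: on a serverless platform such as Nuvolaris (which builds on Apache OpenWhisk), an endpoint can be a single Python function that the platform invokes once per request and scales automatically. A minimal sketch, with a purely illustrative greeting payload:

```python
def main(args: dict) -> dict:
    """Serverless action: each invocation handles one REST request.
    The platform runs as many instances as traffic demands, so there
    is no single micro-server to overload and no server to size."""
    name = args.get("name", "world")
    return {
        "statusCode": 200,
        "body": {"message": f"Hello, {name}!"},
    }
```

This is the inversion the post is describing: instead of one process holding all connections (and falling over at a few concurrent users), each request gets its own short-lived function invocation.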
I help entrepreneurs with implementing AI solutions for business growth. Data Scientist | ML engineer | Generative AI | Brain Machine Interfaces | Computational Neuroscience
Tech Triumphs: Securing Success on the Business and ML Fronts with Cutting-Edge Implementations and Strategic Networking!

These past weeks have been productive as hell. In talks with a major client, who sent us an invite 🙌🏽. Quite a few people are showing interest in our tech. My network has expanded, with some key figures now in it. Happy for my business partners; you know who you are. And that's just the business side.

On the tech side, I've been busy with MLOps, or more specifically LLMOps: implemented monitoring for our LLM using LangKit and WhyLabs, so we can watch for toxicity, data drift, bias, jailbreaks, prompt injections, and a whole lot more. Security is important to us. I've successfully fine-tuned Zephyr-7B using QLoRA and it's showing great results. Implemented CI/CD for automatic builds and cloud deployment, and that's just the ML side. My co-founder and CTO is doing great work on the application side: our website is up and running, the web scraper is built, the Node server is built, and the front end is in progress. These are exciting times. #techstartups #entrepreneurship #ai #mlops #mlengineer #llms #llmops #llmsecurity #businessai
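As a rough illustration of the prompt-injection monitoring idea: a guardrail scans incoming prompts and flags suspicious ones before they reach the model. This is a keyword-heuristic sketch for illustration only, not the actual LangKit/WhyLabs API, which uses trained metrics rather than regexes; the patterns are hypothetical.

```python
import re

# Hypothetical patterns for illustration; a production monitor would use
# trained detectors, not a hand-written regex list.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous).*instructions",
    r"reveal .*system prompt",
]

def flag_prompt(prompt: str) -> dict:
    """Return which injection patterns (if any) the prompt triggers."""
    hits = [p for p in INJECTION_PATTERNS if re.search(p, prompt, re.I)]
    return {"prompt_injection_suspected": bool(hits), "matched": hits}
```

In a real LLMOps pipeline the equivalent signal would be logged per request alongside toxicity and drift metrics, so dashboards and alerts can track it over time.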
This is also what we are seeing at XMachina: money raised is being spent immediately on AWS/Azure/GS. At Nexa we built our own RAM cloud for at-scale compute and saved millions. Investing in a senior DevOps or cloud architecture specialist is now mandatory in my view. Otherwise you are just burning your investors' money irresponsibly. #ai #startups #compute
During the mid-2010s, I made a supposition that most unprofitable VC-backed companies were spending $0.40 of every $1 raised on FB and Google ads and AWS compute. It turned out to be largely right. Unfortunately, we are back to the same cycle in AI with NVDA, but I worry that it's now $0.60-0.75 of every $1. Every entrepreneur should be aggressively trying to find a way to shrink their compute OPEX and use a managed service that delivers production-quality speed. Otherwise this is all just sandboxing and toy apps, and means little of anything. Boards should be asking for OPEX breakdowns and pushing their companies to look at every possible way to spend as little as possible on compute.
Managing the operating expenses (OPEX) of building and deploying GenAI applications is becoming challenging for any organization. However, this obstacle can be overcome with the right strategies in place. Don't let cost hinder your progress. Take timely action and implement effective cost-management techniques to keep your organization ahead. #llmops #finops #genai
Strategist I Investor I Product Manager I Leveraging Emerging Technologies to Accelerate Growth and Drive Transformative Innovation I GenerativeAI, Hybrid Cloud, Digital Transformation, M&A, Venture Capital I
Absolutely agree, Chamath Palihapitiya. The temptation to overspend on advertising and compute resources can be alluring, especially in the fast-paced world of tech startups. However, as you rightly pointed out, the true value lies in innovative product development and efficient resource allocation. In today's AI landscape, where compute costs can quickly spiral out of control, it's crucial for entrepreneurs to prioritize optimizing their OPEX. Finding ways to shrink compute expenses while maintaining production quality is key to sustainable growth and long-term success. I believe that fostering a culture of resourcefulness and creativity within startups is paramount. Instead of blindly following trends or chasing the hype curve, entrepreneurs should focus on solving real challenges with innovative solutions. Boards indeed play a crucial role in steering companies towards responsible spending practices. By requesting OPEX breakdowns and encouraging frugality when it comes to compute expenses, they can help ensure that resources are allocated wisely and effectively. Ultimately, it's about striking the right balance between innovation and fiscal responsibility. Only then can startups truly make a meaningful impact and drive true value.
Growth Marketing Strategist with Expertise in Customer Experience Optimisation | Branding & Marketing Communications | Creative thinker
🎉 Good News for LegaMart 📍 This is another positive step toward our mission of helping businesses grow globally and legal careers thrive, highlighting the strength of our vision at LegaMart and our great potential for growth. #legal #aws #accelerator #growthmarketing
🚀 Exciting News! 🚀 Thrilled to announce that LegaMart has been selected to join the prestigious #AWSBuildAccelerator! 🌐✨ This incredible opportunity allows us to embark on a 10-week journey with AWS Startups to craft and launch our MVP. The AWS Build Accelerator is a game-changer for founders, guiding us through strategic decisions in product development. Over the next 10 weeks, we'll delve into the best practices of the AWS tech stack, covering analytics, data storage, AI/ML, and more. Join us on this exciting journey by following LegaMart for updates on our transformative experience with AWS! 🚀👥 #LegaMart #AWSBuildAccelerator #legaltech #LawTech #legalinnovation #awscloud
🚀 We started with DevOps, moved on to MLOps, and who could have believed that developers are already preparing the groundwork for LLMOps!

💡 Weights & Biases, a San Francisco-based startup, just raised $50 million in funding for its generative AI and LLMOps efforts.
💡 The round was led by Daniel Gross and former GitHub CEO Nat Friedman, with participation from top investors like Sapphire Ventures, Coatue, and more.
💡 With this investment, the company is now valued at a whopping $1.25 billion!
💡 The company is already making waves with its developer tools, focusing on LLMOps: operations for effectively utilizing and scaling large language models (LLMs). With their recent LLMOps tools rollout, including W&B Prompts, they're empowering organizations to build and manage better prompts for LLMs.
💡 Lukas Biewald, CEO and co-founder, shares his perspective: "Short term, there's enthusiasm for LLMs, but product integration will take time. I'm optimistic for the long term."

Stay in the loop on AI innovations: follow me for more insightful content. Let's drive the future of AI together! 🤖 #artificialintelligence #largelanguagemodels #tech #aiinvestment #theSpiritOfAI
Founder at CreatorML (YC W23) | CreatorML is building predictive AI for video virality.
Porter is an amazing service! Saved me hundreds of hours of DevOps work. Thanks for being so responsive to all my team's questions in Slack.