Greetings! I’m Cederik, a Venture Catalyst for your Better Future.
I can help you break things down and formulate a clear, tech-infused, step-by-step approach to achieving your goals over the coming days, weeks, or even years.
I make big ideas come to life across various ventures, all driven by a common purpose – shaping a future that benefits as many people as possible.
Whether I’m wearing the hat of a Co-Founder, Startup Advisor, Managing Director, Strategist, or someone who simply enjoys hacking things together, I bring frameworks and hands-on experience to breathe life into ventures, make smart investments, and offer my time to make things happen.
From boardrooms and strategy sessions to the catacombs of code, I both steer and build to help ventures succeed. Whether you’re after strategic advice or someone to roll up their sleeves, let’s chat.
Together, we’ll make a positive impact, and hey, we’ll try to keep those peanut butter sandwiches to a minimum along the way!
Beyond Pattern Matching: The Next Frontiers of AI

The recent explosion in AI capabilities has largely been driven by one key insight: with enough data and computing power, pattern matching can produce remarkably human-like behaviors. Large Language Models like GPT-4 and Claude have demonstrated that by training on vast amounts of text, AI can engage in sophisticated conversations, write code, and even show glimpses of reasoning.
But is pattern matching all there is to intelligence?
Learning From Game Dev: Optimizing LLM Performance Using Game Engine Principles

Game engines have been optimizing real-time performance for decades. As LLM developers face similar challenges with latency and resource management, there’s much to learn from game development practices. Let’s explore how game engine architecture principles can improve LLM inference systems.
Memory Management Techniques

Game developers have mastered efficient memory management through techniques like:

- Object pooling: pre-allocating commonly used objects
- Resource streaming: loading assets dynamically as needed
- Memory budgeting: strict allocation limits for different subsystems

These same principles can be applied to LLM inference.
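Object pooling in particular maps cleanly onto LLM serving, where buffers such as KV-cache slots can be pre-allocated at startup and recycled between requests instead of being allocated on the hot path. A minimal sketch of the idea (the class, sizes, and names are illustrative, not a real serving API):

```python
from collections import deque

class BufferPool:
    """Object pool: pre-allocate fixed-size buffers (think KV-cache slots)
    so the inference hot path never pays for allocation or GC pressure."""

    def __init__(self, num_buffers, buffer_size):
        # All memory is claimed up front, enforcing a strict memory budget.
        self._free = deque(bytearray(buffer_size) for _ in range(num_buffers))

    def acquire(self):
        # Hand out a pre-allocated buffer instead of allocating a new one.
        if not self._free:
            raise RuntimeError("pool exhausted: request exceeds memory budget")
        return self._free.popleft()

    def release(self, buf):
        # Return the buffer to the pool; its memory is reused, never freed.
        self._free.append(buf)

pool = BufferPool(num_buffers=4, buffer_size=1024)
buf = pool.acquire()   # borrowed from the pool, no allocation happens here
pool.release(buf)      # recycled for the next request
```

The same pattern underlies budgeted allocators in game engines: running out of pool slots fails loudly at the boundary rather than silently degrading frame time (or, here, token latency).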
Unlocking the Power of GPUs: How LLM Inference Mirrors Video Game Rendering

As a video game developer, you’re probably already familiar with the incredible power of GPUs when it comes to real-time rendering and pushing polygons to the screen. What you might not know is that this same technology is revolutionizing AI, particularly in how Large Language Models (LLMs) process data. In fact, LLM inference on GPUs shares many similarities with video game rendering, both leveraging the strengths of parallel computing.
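Under the hood, both workloads reduce largely to the same primitive: a matrix multiply in which every output element can be computed independently, which is exactly what GPUs parallelize. A toy illustration in plain Python, purely for intuition (real systems run this as fused GPU kernels, and the matrices here are made up):

```python
def matmul(A, B):
    # Naive matrix multiply: every output element is an independent
    # dot product, so a GPU can assign one thread per element. This is
    # why the same hardware accelerates both rendering and LLM inference.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Graphics: rotate vertex positions (one row per vertex) by 90 degrees.
vertices = [[1, 0], [0, 1]]
rotation = [[0, 1], [-1, 0]]
rotated = matmul(vertices, rotation)      # -> [[0, 1], [-1, 0]]

# LLM inference: project token embeddings (one row per token) through a
# weight matrix; identity weights here, purely for illustration.
embeddings = [[1, 2], [3, 4]]
weights = [[1, 0], [0, 1]]
projected = matmul(embeddings, weights)   # -> [[1, 2], [3, 4]]
```

Swap the labels and the arithmetic is identical; the GPU neither knows nor cares whether the rows are vertices or tokens.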
The Road to Autonomous Humanoid Robots: Assisting Us in Daily Life

Humanoid robots have long captured the imagination, from science fiction novels to futuristic films. Yet, the vision of robots assisting us in our homes, workplaces, and cities remains a dream for many. Thanks to advancements in AI, robotics, and machine learning, we are closer than ever to making autonomous humanoid robots a part of our daily lives.
But how close are we, really?
The Road to Humanoid Robots Assisting Us in Daily Life: Teleoperated vs. Autonomous

The dream of humanoid robots assisting us in our homes, workplaces, and public spaces is steadily becoming a reality. However, there are two distinct paths these robots will follow to reach us: teleoperated humanoid robots, which are controlled remotely by humans, and fully autonomous humanoid robots, which can operate independently.
Each type of robot is advancing at its own pace, with different levels of maturity and timelines for widespread deployment.