David Carr
San Francisco, California, United States
1K followers
500+ connections
Other similar profiles
- Divye Kapoor (Menlo Park, CA)
- Nhu Trinh (San Jose, CA)
- Brennen Barney, Director of Engineering at Domo, Inc. (Salt Lake City Metropolitan Area)
- Piyush Kapoor (Mountain View, CA)
- Zheng Z. (San Francisco Bay Area)
- Pararth Shah (San Francisco Bay Area)
- Xueqiao (Joe) Xu (San Francisco Bay Area)
- Atanu Ghosh (Fremont, CA)
- Leslie L. (Los Angeles Metropolitan Area)
- Aleksander Bello (San Francisco Bay Area)
- Vinay Rayini (Austin, TX)
- Kun Ji (Bellevue, WA)
- Shuo LI (Greater Seattle Area)
- Xinyuan Gui (San Francisco, CA)
- Wanheng Li (San Francisco Bay Area)
- Karan Kurbur (San Francisco Bay Area)
- Hanqing Liu (Mountain View, CA)
- Aaron (Ruixin) Li (Redmond, WA)
- Amitesh Srivastava (San Francisco Bay Area)
- Jonathan Ma (New York, NY)
Explore more posts
-
Jordan Mazer
What's going to happen to NVIDIA now that they've reached a $3T market cap?

For the current employees and long-timers... I've read stories of mid-level engineers who reportedly earned more than $60M via stock appreciation. Imagine the numbers for the more senior folks. How will NVIDIA keep folks around who are post-economic?

And for all the new folks... The value proposition is complex. The company is the most valuable (by market cap) in the world. If they are heavily compensated via RSUs, the assumption is that there is significant remaining upside to be gained. It's incredibly difficult to continue orders-of-magnitude stock appreciation after reaching the absolute zenith...

NVIDIA is an obvious tech darling. It looks and feels like a version of Google in the mid-to-late 2000s, but I'm really interested to see whether the immense financial success of the stock will end up driving out the core talent group (e.g., the long-timers who are now mega rich) while also limiting access to new upstart talent who don't see a path to outsized financial success. Curious what others think!!
36
10 Comments -
Ian Gillespie
The newest issue of The Pragmatic Engineer found a surprise uptick in eng hiring in the typically slow summer season. I'm curious to see how the macro environment impacts this uptick as we head into the fall, but I remain hopeful. I asked Gemini to give me the tl;dr of the article to share with my team (sorry, Gergely Orosz) and found it missed one of the most interesting pieces of the article.

Gemini's highlights:
- Unexpected surge: tech recruitment has seen an unusual spike in June and July, contrary to typical trends.
- Increased demand: startups, especially AI-focused ones, are driving the hiring uptick.
- Global phenomenon: this trend isn't just limited to the US; Europe is also experiencing increased recruitment activity.
- Reasons for hiring: companies are backfilling positions, investing in growth due to improved financials, and responding to increased VC funding.
- Challenges remain: while the job market is improving, finding qualified candidates and offering competitive compensation remains difficult for many companies.

What I found most interesting: the nature of the backfills. The article finds that a lot of new headcount is actually backfills of folks who were "over-slashed" during rounds of layoffs. Not entirely surprising given the "trim the fat" and "rebuild lean" mentality, but interesting nonetheless.

Either way, the uptick is encouraging (and don't always rely on Gemini's tl;drs). #hiring #startups #VCbacked #softwareengineering
11
1 Comment -
Gary Lerhaupt
Who's got a Waymo experience to share? I just had my first truly bad one.

First, I should say I love Waymo. It's not in many places yet, but it's poised to be everywhere soon. The app tells me I've done 26 rides covering 97 miles in SF, and mostly it's been an exceptional experience. Apart from some quibbles, like getting dropped off a block away up a big hill, it's a zen-like, smoother-than-a-human-driver ride to wherever I'm going. But I think we're all bound to experience some hiccups as we journey to the future.

In my case, I was taking a trip into the Financial District for an event, and I had 6 minutes to spare given the ETA. Then all of a sudden the car decides it wants to pull into the left lane, which is filled with parked cars. I sit there, at first amused at this decision. It pulls up right behind the parked car and waits. Then it goes through a dance of deciding it wants to get back into the original lane (everybody loves to snub a driverless car in traffic!) while pulling ever closer to the parked car ahead. Miraculously, the driver of the parked car shows up, loads his trunk, and departs with ease. The Waymo decides the next course of action is to pull directly behind the next parked car in the lane. My amusement fades. I call support via the touchscreen. My 6-minute buffer has evaporated to 1 minute, the ETA suddenly says I'll be 13 minutes late, and a driver assistance team is dispatched and on their way. Yikes! Then magically (I think they manually took control), the Waymo behaves like a normal driver, inches back into the normal lane, and I'm on my way, arriving 1 minute late to the meeting.

So in the end: a bit of time stress for someone who really likes to be on time (I started a time company, after all), and not really world-ending. In fact, despite this unfortunate experience, I'll be loyally riding again. And so I guess this ends as a parable about what product-market fit looks like: if you can mostly deliver superior experiences, there's room for rough edges. (Pictured below: mom's first trip in a Waymo.)
28
4 Comments -
Nathan Nannenga
Want to tease something my team has been working on over at Buffer! The attached image isn't a mockup, but a screenshot of a live prototype of RSS feeds directly in Buffer! It's only available internally and still pretty rough at this point, but we'll have it available for user testing soon. With this enhancement, we hope to help Buffer users easily find new and engaging content to share with their audiences. What do you think? Will RSS feeds in Buffer help you create more content?
19
1 Comment -
Scott Persinger
🍜 Anybody up for some ramen? I am excited to share that I am back in the garage, building again (it's actually a house in the Berkeley Hills). We are taking the wraps off of Supercog AI, a new startup focused on solving application integration using LLM-powered agents. My co-founder in this new venture is a great friend of mine (and a former co-founder of mine), Alex Osborne.

Alex and I have worked around the application integration space for a long time, and we are unreasonably excited about applying the GenAI stack to this problem. Not many people relish the task of getting some bits to move from system A to system B. But it's a critical job to be done, and that's led to a lot of brittle scripts, a myriad of inflexible "no code" tools, and a lot of hand-crafted SQL.

But the power of the large language model, trained on a huge corpus of information, offers a remarkable tool for solving this problem. The LLM you use today already knows the APIs of hundreds of popular systems. It knows the SQL dialects of every major database. It understands the semantics and data schemas of many popular SaaS systems. It understands much of the specific domain in which your business operates.

I like to call the AI revolution "the last platform shift". That may turn out to be hyperbole, but I definitely believe we are in the very early stages of seeing what this new stack can enable. I know we are hardly the first folks to claim that "amazing stuff is coming!". That is why Alex and I have decided to focus on a real and hard problem. If we can prove that LLMs can power a new way to solve this problem, it will be the first successful *new* approach in 15 years.

We will have much more to share about this new platform soon. In the meantime, if you're interested in getting a peek, or even better if you have application or data integration tasks that you would like some help with, please reach out, because we want to help. #genai #startups #backinthegarage
164
37 Comments -
Anton Zaides
What does it feel like to be an engineering manager at Amazon? Since the RTO announcement it has become popular to write shit about Amazon, but it's still one of the best companies in the world. A few weeks ago, Gilad Naor shared 3 lessons from his time as an engineering manager at Amazon with the 'Leading Developers' readers. Here is the first one; read the full article here: https://lnkd.in/dHBxv2Ha

𝟭. 𝗦𝗼𝗹𝘃𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺𝘀 𝘄𝗶𝘁𝗵 𝗺𝗲𝗰𝗵𝗮𝗻𝗶𝘀𝗺𝘀

My first meeting at Amazon was baffling. I entered a room with about a dozen Software Development Managers (SDMs) and Product Managers (PMs). Everyone sat in silence and looked at their laptops or the printed papers they had before them. It was my second day at Amazon, and I picked up one of the extra sheets and started reading. After those first ten awkward minutes, everyone finished reading and the meeting began.

Why do Amazon meetings always start this way? This is a specific example of how Amazon approaches problem-solving. It is the "Good Intentions Don't Matter" approach. In a famous internal video, Jeff Bezos talks about why good intentions are irrelevant. When things don't work, it does not matter what people wanted. All that matters is fixing the problems with mechanisms.

For example, many busy managers come to meetings without first reading the prep material. They all intended to read the 6-pager before the meeting. They just got sidetracked by the latest emergency. Scolding the managers won't solve the problem. A mechanism will. This is why all Amazon meetings start with a study period. The remaining meeting time is then spent on high-quality decision-making.

Meta solves the same problem by creating cultural norms. I can't count the number of times I heard a senior leader cut off the presenter: "Stop, assume that everyone here already read the slides. Let's get to the point." This is a culture-first approach. I have seen it work well in some teams, and not at all in others. At Amazon, the solution is part of the company's "operating system".

-------------------------

Anton here - by now everyone is familiar with the 6-pager concept, but that "operating system" thinking can be used almost anywhere. Think about something annoying that your developers do, and how you can solve it. For example, I remember being annoyed that people didn't update their Jira tickets. Finally, after asking multiple times, we now just take 20 seconds during standup to update them.
56
8 Comments -
Eugene Shih
"Your main job is to make your product win." I think this should apply to any product that your company makes. I also like what he says following this statement about what is NOT your main job. I will summarize here: "Your main job is not to make your manager happy, it is not to make execs happy." Also, note, this main job doesn't only apply to PMs, but also to senior Engineering Managers, Tech Leads, and Designers. #makeyourproductwin #leadership
5
2 Comments -
Giuseppe M.
On April 18th, just last week, Meta unveiled Llama 3, lauded as "the most capable openly available LLM to date". This remarkable achievement by the Meta team comes hot on the heels of Llama 2's release last July. Notably, the new models surpass their predecessors and competing offerings from other providers in terms of performance. Meta's commitment to open sourcing these powerful models while prioritizing model safety and responsible usage is commendable. (https://lnkd.in/ghv3RJRz)

Yesterday, Microsoft introduced Phi-3 Mini, a member of the Phi-3 family of models, promoted as "the most capable and cost-effective small language models available". According to Microsoft, the key to achieving high performance in such a small package lies in the quality of the training data. The model has been open sourced and is now available on Ollama and Hugging Face. (https://lnkd.in/gKgSmrBn)

These compact yet powerful models serve as invaluable tools for creators eager to bring their visions to life or simply experiment with cutting-edge technology, particularly those with limited access to resources. Shortly after the release of Llama 3, reports began surfacing of enthusiasts running the 8B model on a Raspberry Pi 5 equipped with just 8GB of RAM—a modest $80 single-board computer. (https://lnkd.in/gbeQPhQ2) While the performance understandably reflects the limitations of such a platform, the fact that the model runs and generates output is nothing short of magical. Similarly, the Phi-3 Mini model can run on ubiquitous devices such as smartphones.

Reflecting on my own journey, I recall the pivotal role that early access to a low-cost computer played in shaping my passion for coding. The gift of an 8-bit Commodore 64 home computer ignited my curiosity at a young age.
Later, with the guidance of an outstanding Computer Science teaching staff at my high school IISS Marconi-Hack Bari, I embarked on ambitious projects like a networked multiplayer version of Battleship written in Turbo Pascal and 8086 Assembly, complete with custom sprites and lo-res graphics. These formative experiences paved the way for my career, eventually leading me to various engineering and leadership roles. It all began with a simple, low-cost, modestly capable computer used for experimentation—a gateway to discovering my life's passion. I encourage educators and parents alike to consider setting up similar low-cost experimentation environments. By introducing younger generations to open-source AI technologies, we can nurture their creativity and help them uncover their true passions. It's not just a weekend project—it's an investment in their future and the future of innovation and discovery. #education #youth #AI #LLM #future #innovation
24
-
Chris Sanders
Llama 3 is out. It's the new open-weights model from Meta, and you can try it out for free at https://www.meta.ai/. Ollama had it ready to go within hours (https://lnkd.in/eJks-Vg2). If you grabbed a copy from Ollama within the first ~5 hours, pull again; they fixed an incorrect end-token issue.

In my last post, I suggested that stiff competition between models would lead to competing on price. Meta.ai is now free, even without a Facebook login. Initial claims and benchmarks put Llama 3 on par with private models in many areas, a significant development. The proprietary vs. open model gap is closing, thanks to Meta's commitment to top-tier open-weight models. I can't imagine where the open LLM community would be without Meta's financial commitment.

I tried Llama-3-8B and a low-quant Llama-3-70B briefly last night. While I haven't formed a solid opinion yet, the benchmarks claim Llama-3-8B outperforms Llama-2-70B. If that holds true, it's a significant win, as it's not just about throwing more compute at the problem: making models that can run on consumer hardware better is a big deal. While a 70B model is currently out of reach for most consumers, it's easy to imagine it becoming increasingly accessible as video cards with more memory become the norm. I managed to run Llama-3-70B at 2-bit quantization on a 3090 without running out of RAM. I don't have high hopes for great results, but after some tuning, models based on Llama-3-70B could make for a very reasonable private assistant on modest hardware.

If you've tried Llama 3, let me know your experience so far. Also, if you have any tips for the best options for a 3090, let me know.
2
-
Sumeet Maniar
A colleague referred me to this. It's a great example of using humans to perfect, train, and scale an AI system: the difference between Tesla's self-driving mode and Waymo's. Waymo is way ahead because it uses humans to train the models faster on edge cases, and in driving, as we know, there are far too many edge cases. Similarly, a colleague complained to me that Speechify does not work well for reading legal briefs (for example, it reads out headers and footers along with the text, and it does not skip the table of contents). I told her the solution will get better. Right now we are in first or second grade. Key lesson: with humans in the loop, these solutions will keep getting better and faster.
4
-
Titus Winters
I just realized: we should start saying "build memoization" rather than "build caching." We aren't really talking about least-recently-used or time-to-live algorithms, we're talking about "don't bother recompiling this piece if none of its inputs changed." (It has been puzzling me that some people are so deeply fearful of "build cache" as a concept, and honestly I think it's mostly because most build systems have non-hermetic state scattered all over the place. Maybe a slightly more precise term for the type of caching we're concerned with would clarify the semantics in question.)
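The distinction Winters draws can be sketched in a few lines. This is a hypothetical illustration, not any particular build system's API: the "cache" is just a map from a content hash of all declared inputs to the build output, with no LRU or TTL policy at all, which is only sound if the compile step is hermetic (reads nothing outside its declared inputs).

```python
import hashlib

def input_digest(*inputs: bytes) -> str:
    """Hash every declared input together; any change yields a new key."""
    h = hashlib.sha256()
    for blob in inputs:
        h.update(hashlib.sha256(blob).digest())
    return h.hexdigest()

# Pure memoization: a plain dict, no eviction, no expiry.
_memo: dict[str, bytes] = {}

def compile_step(source: bytes, flags: bytes, compile_fn) -> bytes:
    """Recompile only if (source, flags) has never been seen before."""
    key = input_digest(source, flags)
    if key not in _memo:
        _memo[key] = compile_fn(source, flags)
    return _memo[key]
```

If `compile_fn` secretly reads non-hermetic state (environment variables, system headers, the clock), the memo table returns stale results, which is exactly the fear Winters attributes to "build cache" skeptics.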
83
9 Comments -
Neal Ghosh
🚨🚨🚨 Unlocking Innovation: The Power of the Adjacent Possible

Innovation programs can stall when the team tries to ideate way beyond their collective perspective and frame of reference: a software engineer brainstorming genetic modification of mycelia, or a senior executive ideating improvements to processes they haven't participated in for decades. It's challenging, exhausting, and ultimately self-defeating.

Enter the concept of the "Adjacent Possible" - a game-changer for innovation.

What is it? The Adjacent Possible, coined by Stuart Kauffman, represents the realm of new ideas within reach based on our current knowledge, resources, and capabilities.

Why it matters:
➡️ Realistic innovation: it focuses on achievable advancements, not far-fetched concepts.
➡️ Incremental progress: each innovation opens doors to new possibilities.

How to leverage it:
• Constantly explore the boundaries of your adjacent possible. What's in your line of sight is what you know best.
• Combine existing ideas in novel ways. Even small perturbations from the status quo can generate large innovation leaps.
• Stay curious about developments in adjacent industries. Cross-domain patterns create fresh sources of ideas while also expanding the adjacent possible.

Most of all, it takes the 'genius' pressure off the team. No one knows everything, but each of us knows a lot about something, and we can all unlock plenty of innovation potential by focusing on the spaces we know best.

Are you exploring your Adjacent Possible? Share your thoughts below! 👇 #Innovation #innovationsystems #AdjacentPossible
8
2 Comments -
Alexandru Bucur
This is a pretty spot-on post by Addy Osmani, but I'd like to add one more thing. https://lnkd.in/eu-Nrr3v My hot take, and what would be interesting to discuss, is the pressure on senior+ developers doing code reviews (or on developers in general). LLMs are pretty great, especially if you want to boilerplate a lot of crap or write tests, but they can introduce subtle logic bugs. In theory these should be caught by tests, but then again, the same LLM is going to help write said tests.
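A toy illustration of the failure mode above (hypothetical code, not from Osmani's post): the implementation misreads an inclusive boundary, and a test written from the same misreading passes anyway, so only a reviewer who checks the spec's edge case will catch it.

```python
def bulk_discount(quantity: int) -> float:
    """Spec: 10% off for orders of 100 units *or more*."""
    if quantity > 100:  # subtle bug: should be >=, so exactly 100 gets no discount
        return 0.10
    return 0.0

# A test generated from the same flawed reading of the spec passes happily,
# because it never exercises the boundary the spec actually cares about:
def test_bulk_discount():
    assert bulk_discount(150) == 0.10
    assert bulk_discount(50) == 0.0
```

`test_bulk_discount()` runs green, yet `bulk_discount(100)` still returns `0.0`; passing tests tell the reviewer nothing about the one case that matters.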
9
2 Comments -
Pedro Escudero Zumel
Curious and slightly unsettling experience with ChatGPT during a novel review process (it has been a long time since the publication of my last fiction book!). I uploaded 4 chapters using the premium version and requested summaries of the main characters' names, surnames, and descriptions. One character intentionally had no surname, but ChatGPT provided one. After I pointed out the error, ChatGPT apologized, acknowledged the mistake, and offered a different surname. When I inquired about the repeated incorrect surnames, ChatGPT explained its reasoning:

Certainly! The reason for the earlier errors with the characters' surnames is a mix of factors:
1. Initial confusion in searching the document: when I tried to retrieve the names from the original document, the search didn't extract the correct information, leading me to create plausible but incorrect surnames instead of relying on the actual text.
2. Autogenerated names: in cases where I couldn't locate the correct surname, I defaulted to generating surnames that seemed fitting for the context or the character's first name. This was a mistake, as I should have used the accurate details from the text.
3. Imprecise searches: my early attempts at searching the document didn't yield the right results, so I made assumptions instead of double-checking the content properly.

#chatGPT #AI
22
2 Comments -
Kellan Elliott-McCrea
While there are certainly individual variations in skill and experience, when discussing modern software development the story is much more nuanced than pure individual effort.

* Software development is a team sport -- individual performance is heavily impacted by the system in which people are asked to operate.
* We pair skill and experience with this model of understanding performance by expecting people to influence and adapt in direct proportion to their seniority: the more senior you are, the more responsible you are for the outcomes of your work; the less senior you are, the more your manager and management chain are responsible.
* Humans want to succeed; we're social animals who derive purpose from demonstrating skill and working collectively.

All of which is to say: there is no way you have over 1,000 people underperforming at your company without senior leadership being deeply implicated. If nearly 10% of your company is failing, versus the market simply having moved against you, you need to fire the CEO.
150
9 Comments -
Vaibhav Puranik
Anthropic's latest Claude AI model is taking human-AI interaction to a whole new level! 🖥️🤖 The new Claude 3.5 Sonnet can now use computers just like a human would: by looking at the screen, moving the cursor, clicking buttons, and typing text. This is a major leap forward in AI capabilities.

Key features:
- Operates based on screenshots, emulating human computer use
- Can perform complex multi-step tasks across different applications
- Self-corrects and retries when encountering obstacles
- Accessible via API for developers to build with

Why it's game-changing: this opens up incredible possibilities for AI assistance in data entry, form filling, software testing, and automating repetitive computer tasks. Imagine having an AI assistant that can seamlessly work alongside you, handling tedious computer work while you focus on higher-level tasks.

While still in beta, this technology demonstrates how AI is rapidly advancing toward more human-like interaction with digital interfaces. As it develops further, it could revolutionize how we work with computers and boost productivity across industries. The future of human-AI collaboration just got a lot more exciting! What do you think about this new capability? #AIInnovation #FutureOfWork

Here is a video about it: https://lnkd.in/gh8aNYEu
6
-
Anup Gosavi
Ever wonder how teams at Airbnb & directors like Spielberg turn big ideas into unforgettable experiences? ✨ It all starts with storyboarding - it’s like sketching out your project with a comic strip to visualize each step before you begin. We’re launching a "Multimodal Storyboarding Starter Kit" for developers, enabling you to unlock GenAI storyboarding in your apps. Check out StoryFlow, a demo app built on VideoDB: 1. Enter your app description. 2. List user steps. 3. Hit Generate & watch your user journey come to life! Want early access? Reply or like this post! :)
20
1 Comment -
Tim Wilde
Google had nine of my fellow Googlers *arrested* last night. Their "crime"? Peacefully protesting Google's Project Nimbus contract with the Israeli government, and recent revelations that, despite repeated explicit claims by Google to the contrary, this contract includes providing discounted cloud computing and consulting services to the Israeli Defense Force (IDF). You can agree or not with the overall message and goals of #NoTechForApartheid. You can debate whether it's appropriate to call what's happening in #Gaza a #genocide. But you can't deny that this was an act of pure #retaliation against these Googlers. Google claims that the removal of these Googlers was a matter of "workplace safety". Whose safety, exactly, was being compromised by nine peaceful employees inside the workspace of their own employer? You can say they should have used official channels - they did! We have! Hundreds of Googlers have spoken out about Project Nimbus internally through all manner of official channels. We have been consistently ignored, lied to, and given nonsense non-responses. Google culture explicitly encourages escalation - what is the next step when you've been ignored through all official channels but to make yourselves impossible to ignore? We do not consent to the output of our labor being used, directly and indirectly, to fight an unjust war, kill innocent civilians, and commit war crimes. We will not be silenced through illegal retaliation for speaking up about the conditions of our workplace. We will not be used. What can one person do? Use your voice. Speak up. And join with others. In collective action, we have power. Join the Alphabet Workers Union (AWU). Sign the petitions to end Project Nimbus, restore Eddie Hatfield's job, and more (internally or the external petition on notechforapartheid.com ). 
If you live in the US, contact your representatives in government and tell them that you want to stop funding this unjust war, and that you want clear violations of labor law, such as Google's, to be punished, and worker protections enforced and improved. Alone, it can seem like change is impossible. But together, we can make change. #Solidarity #Labor #Union #AWU
215
10 Comments -
Brad Ledbetter
I was reading one of the 2 newsletters I follow - The Grappler's Diary on Substack - and the author (Chris Wojcik) pointed out one of those "well, yeah, of course" kinds of ideas that made me stop and think for a bit. Wojcik pointed out that it takes more and more effort to reach higher levels of skill in jiu jitsu. This is of course true of any skill. I've certainly seen it with programming. The easiest and biggest leaps are at the beginning. Going from zero to your first working program is pretty easy. Becoming a true professional developer takes years. Continuous improvement takes introspection, feedback, seeking out training and education, and lots and lots of work. And the noticeable improvements these days are small: cleaner code, improved readability, and better organization. Little things that matter. And the more time I spend as a "programmer", the less I write code and the more time I spend on related things: technical design, code review, stakeholder relations, project management, and more. Just as in martial arts: the more skilled you get, the more time you spend teaching, coaching, and advising, and the less time you spend getting beat up by kids.
7
5 Comments