I started a local vibecoders group because I think it has the potential to help my community.

(What is vibecoding? It’s a new word, coined last month. See https://en.wikipedia.org/wiki/Vibe_coding)

Why might it be part of a solarpunk future? I often see and am inspired by solarpunk art that depicts relationships and family happiness set inside a beautiful blend of natural and technological wonder. A mom working on her hydroponic garden as the kids play. Friends chatting as they look at a green cityscape.

All of these visions have what I would call a three-way harmony: harmony between humankind and itself, between humankind and nature, and between nature and technology.

But how is this harmony achieved? Do the “non-techies” live inside a hellscape of technology that other people have created? No! At least, I sure don’t believe in that vision. We need to be in control of our technology, able to craft it, change it, adjust it to our circumstances. Like gardening, but with technology.

I think vibecoding is a whisper of a beginning in this direction.

Right now, the capital requirements to build software are extremely high–imagine what Meta paid to have Instagram developed, for instance. It’s probably in the tens of millions or hundreds of millions of dollars. It’s likely that only corporations can afford to build this type of software–local communities are priced out.

But imagine if everyone could (vibe)code, at least to some degree. What if you could build just the habit-tracking app you need, in under an hour? What if you didn’t need to be an Open Source software wizard to mold an existing app into the app you actually want?

Having AI help us build software drops the capital requirements of software development from millions of dollars to thousands, maybe even hundreds. It’s possible (for me, at least) to imagine a future of participative software development–where the digital rules of our lives are our own, fashioned individually and collectively. Not necessarily by tech wizards and esoteric capitalists, but by all of us.

Vibecoding isn’t quite there yet–we haven’t reached the Star Trek computer. I don’t want to oversell it and promise the moon. But I think we’re at the beginning of a shift, and I look forward to exploring it.

P.S. If you want to try vibecoding out, I recommend v0 among all the tools I’ve played with. It has the most accurate results with the least pain and frustration for now. Hopefully we’ll see lots of alternatives and especially open source options crop up soon.

  • @houseofleft@slrpnk.net
    3 months ago

I think the pretty universal answer in all these comments is “no”. I think that’s fair, but I’d add some caveats.

There’s a lot of negative sentiment here around LLMs, which I agree with, but I think it’s easy to imagine some hypothetical future where LLMs exist without the current water/energy overuse, the hallucinations, or big companies stealing individuals’ work. Whether that future is likely or not, I think it’s possible.

The main reason vibe coding isn’t solarpunk is that, taken by itself, it’s not in any way related to ecological stewardship, anti-capitalist community building, or anything else that’s core to solarpunk. Vibe coding might or might not be part of some “cool techy future” in the same way as flying cars, robots, and floating cities, but that’s not a reason to consider it solarpunk.

If you’re into LLMs and solarpunk, instead of arguing that LLMs are solarpunk, you can make efforts to push them toward being more solarpunk. How can LLMs support communities instead of corporations? How can we, through weight sharing and various optimisations, make LLMs less damaging to the environment? Etc. That’d at least be a solarpunk way to go about LLMs, even if LLMs aren’t inherently solarpunk.

    • @canadaduane@lemmy.caOP
      1 month ago

This is exactly what I’m trying to do, but I was taken aback at how negatively the solarpunk community took things. I thought of myself as solarpunk, but I’ve had to reconsider since posting this.

      • @houseofleft@slrpnk.net
        1 month ago

That’s sad to hear. People on the internet can seem harsh; I think it’s probably too easy to forget there’s a real person behind most questions.

It’s been like a month now, and I still don’t really think LLMs are solarpunk. Trying to make them more open and community-based sounds worthwhile though, so good luck with it!

Massive side point, but if you’re interested in “empowering people who don’t want to deal with the technical details of coding”, check out the ideas around “end-user programming”. It’s a pretty broad church, but there’s some cool stuff happening under that term that it sounds like you’d like.

    • @strongoose@slrpnk.net
      3 months ago

      I agree with your assessment, but I’m more pessimistic about LLMs as a technology. The Luddites tell us that machines are not value-neutral - we should ask who the LLMs serve.

      The core function of an LLM is to enclose public commons (aggregate, open-access human knowledge) in a centrally-controlled black box. It’s not a coincidence that corporations are trying to replace search with LLM summaries - the point is for the model to be an intermediary between the user and the information they need.

      Vibecoding embraces this intermediation - to the vibecoder, an understanding of the technology they’re building is simply a cost that must be surmounted, and if they can avoid paying it, so much the better. This is misguided. Knowledge is power, and we cede that power at our peril. Solarpunk is punk, and punk is DIY, and DIY means taking back ownership of spaces and technologies.

      I won’t say that it’s inherently wrong to cede that ownership - tactically. Perhaps the OP is building essential tools that their communities can’t access otherwise. But short term fixes a solarpunk future do not make.

      • @signaleleven@slrpnk.net
        19 days ago

I have all the same moral and environmental issues most of y’all have with giant corporate models, but I take issue with this statement:

        The core function of an LLM is to enclose public commons (aggregate, open-access human knowledge) in a centrally-controlled black box.

The core function of an LLM is to generate statistically plausible text (which is what my totally open-source mobile keyboard is doing as I type, using a very small transformer-based LLM, for instance).
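As a toy illustration of what “statistically plausible text” means at its simplest, here is a bigram model. This is a hypothetical minimal example of my own, not the keyboard’s actual code; real keyboards use small transformers, but the principle of predicting the next token from preceding context is the same:

```python
import random
from collections import defaultdict

# Tiny toy corpus; real models are trained on vastly more text.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count which words follow which word: these statistics ARE the "model".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Extend `start` by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and following.get(out[-1]):
        out.append(rng.choice(following[out[-1]]))
    return out

print(" ".join(generate("the", 6)))
```

Every word the generator emits is one that actually followed the previous word somewhere in the corpus; there is no lookup of stored answers, just sampling from learned statistics.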

Using it to provide an answer to a search instead of returning sources is 100% the evil you described. But that is a shitty use of the technology, and it would be unfair to reflect it entirely onto the technology itself.

LLMs are not going away. We might disagree on their usefulness (I flip-flop daily on my opinion about it, which is usually a sign that something is inherently neutral), but zealous blanket rejections worry me a bit.

The other knee-jerk reaction, about energy (and water, though that one is avoidable) usage, is also something I try to process in a somewhat compartmentalized way. It needs to improve, and the scale of growth is unsustainable. But does that invalidate everything currently being explored or researched?

The push for more efficiency is vital and legitimate. Do more with less. But while it’s fair to criticize someone for using an incandescent light bulb instead of better technology to, say, illuminate a room, criticizing them for using light in that room is wrong, IMHO. We don’t need less light (well, yes, outdoors, but for different reasons); we need better technology and cleaner energy, so we don’t need to worry about who is turning on which light. I get that “AI” is power hungry and that needs to improve, but I am very uncomfortable with the idea that we should decide a priori whether something is worth using energy on. It’s… a bit draconian?

I know it’s not a super original position (“a tool is just a tool”). I’m trying to work through this myself. As I type this, I think of PoW blockchains as a counterexample I would bring up to debate myself. Yes, it looks like there are usages that are inherently “wrong”. Why do I find blockchain worse? Because I consider it unworthy of the energy spent on it, which makes me “guilty” of exactly what I criticized…

Damn, it’s hard to try to have opinions!


More on topic: vibe coding (super icky name, jfc) might be vaguely OK for prototyping in some cases, or in extremely limited cases where you can almost prove correctness. Or, yeah, for personal tools where you’re the only person responsible for and affected by the results. Anything more than that makes me nervous. It doesn’t have much to do with solarpunk per se. But AI-aided development (maybe a broader and less silly-sounding concept) is not antithetical to solarpunk, IMHO. The DIY nature you ( @strongoose@slrpnk.net ) describe doesn’t go all the way down. To build a community garden from scratch, you first need to invent the universe; not knowing how to invent the universe doesn’t make the garden any less yours. You still own the technology even if you use a tool whose internals you don’t fully understand. You do need to retain the option to understand it, though, I agree.

  • @keepthepace@slrpnk.net
    1 month ago

Most of the solarpunk crowd seems to equate anything LLM with Sam Altman and Elon Musk. They think it is a purely capitalistic endeavor that can’t run on anything other than methane-breathing datacenters. There needs to be some education about its real impact and its open-source side, to explain how it can fit into a post-capitalist society.

I do think that vibe-coding is one way to reappropriate tech, yes, and is extremely solarpunk. It makes manipulating machines and designing systems a far more inclusive capability, bringing it from the domain of specialists into the political sphere.

    But explaining that is an uphill battle. When I made a post about solarpunk AI a year ago, it was well received. I fear it would be downvoted into oblivion if I published the same thing today.

  • noodle (he/him)
    3 months ago

Using insecure code that a glorified autocorrect has spat out hopefully isn’t going to be a part of the future I’ll be living in.

    • @canadaduane@lemmy.caOP
      3 months ago

I might be misunderstanding, but it sounds like you’re angry at AI, or at least you’d like its use to diminish, not grow.

  • @perestroika@slrpnk.net
    3 months ago

The concept is new to me, so I’m a bit challenged to give an opinion. I will try, however.

    In some systems, software can be isolated from the real world in a nice sandbox with no unexpected inputs. If a clear way of expressing what one really wants is available, and more convenient than a programming language, I believe a well-trained and self-critical AI (capable of estimating its probability of success at a task) will be highly qualified to write that kind of software, and tell when things are doubtful.

    The coder may not understand the code, though, which is something I find politically unacceptable. I don’t want a society where people don’t understand how their systems work.

It could even contain a logic bomb and nobody would know. Even the AI that wrote it may fail to understand it tomorrow, after the software has become sufficiently unique through customization. So there’s a risk that the software lacks even a single qualified maintainer.

Meanwhile, some software is mission-critical: if it fails, something irreversible happens in the real world. This kind of software usually must be understood by several people. New people must be capable of coming to understand it through review. They must be able to predict its limitations, give specifications for each subsystem, and build testing routines to detect the introduction of errors.

Mission-critical software typically has a close relationship with hardware. It typically has sensors coming from the real world and effectors changing the real world. Testing it resembles doing electronic and physical experiments. The system may have undescribed properties that an AI cannot be informed about. It may be impossible to code successfully without actually doing those experiments and finding out the limitations and quirks of the hardware, and thus it may be impossible for an AI to build it from a prompt.

I’m currently building a drone system and I’m up to my neck in undocumented hardware interactions, but even a heating controller will encounter some. I don’t think people will find success in the near future letting an AI build such systems for them. In principle it can. In principle, you can let an AI teach a robot dog to walk, and it will take only a few hours. But this will likely require giving it control of said robot dog and letting it run experiments and learn from the outcomes, which may take a week, while writing the code by hand might also have taken a week. In the end, one code base will be maintainable, the other likely not.

  • @heavydust@sh.itjust.works
    3 months ago

    (warning: I hate “vibe” coding for a lot of reasons, and even more what it represents)

    LLMs are the opposite of anything ecological IMHO.

    What if you could build just the habit-tracking app you need

    We have a thousand of those already. A better example is needed.

    mold an existing app

That’s not how any of this works. One more reason to shun those who do not care to take the time to understand what programming is all about.

    the capital requirements of software development from millions of dollars

Linux is free, FFS: install Ubuntu today and you have all the languages you’ll ever need. How is vibe coding’s code vomit helping? Also, LLMs are very expensive to run right now, so it’s the worst example.

    Last but not least, I hate how all the CEOs, managers, companies, and random people try to: pretend that open-source does not exist, change the meaning of the word open-source by associating it with binary blobs, and show developers as selfish people (“tech wizards”) who want to keep the technology for themselves.

You don’t want to learn how computers work and that’s fine, it’s your right, but don’t pretend it’s anyone else’s fault.

    • @canadaduane@lemmy.caOP
      3 months ago

      Thanks for your thoughtful reply. I admire that, despite the clear differences we might feel around the subject. I’ll try to be thoughtful as well.

      LLMs are the opposite of anything ecological IMHO.

      I think this is a really interesting point, and I hope to hear it unpacked some time. I’d be interested to know if you’re talking about American LLMs, or some other breed of LLMs, or the transformer algorithm that generates language models itself.

      We have a thousand of those already. A better example is needed.

I mentioned this in another reply, but will repeat a bit here. I didn’t go into detail in the original post because I wanted to be brief. But the habit tracker app I was thinking about was something my daughter designed. She isn’t a coder. But she had a complex set of nuanced motivation ideas for herself: she wanted a system where if she did something healthy for herself she would be awarded stars, and if she did something social she would be awarded flowers. I’m doing her app a disservice by abbreviating it. She wrote a 19-page description (a Product Requirements Doc, in engineering terms, though she wouldn’t know that term) in Google Docs, and then built her app in v0. She was so, so excited to see her ideas come to life! It’s the first time I’ve ever seen her really interested in computers.
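To give a sense of how small the core of such a tool is, here is a minimal sketch of that kind of reward logic. The names and the two categories are my own hypothetical reconstruction for illustration, not her actual v0 app:

```python
from collections import Counter

# Hypothetical reconstruction of the reward rules described above:
# healthy habits earn stars, social habits earn flowers.
REWARDS = {"healthy": "star", "social": "flower"}

def log_habit(totals: Counter, category: str) -> Counter:
    """Record one completed habit and award the matching token."""
    try:
        reward = REWARDS[category]
    except KeyError:
        raise ValueError(f"unknown habit category: {category}")
    totals[reward] += 1
    return totals

totals = Counter()
log_habit(totals, "healthy")  # earns a star
log_habit(totals, "social")   # earns a flower
log_habit(totals, "healthy")  # another star
print(dict(totals))  # {'star': 2, 'flower': 1}
```

The point isn’t the twenty lines of Python; it’s that her 19-page document of nuanced personal rules is the hard part, and that part she could already do.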

(re: mold an existing app) That’s not how any of this works. One more reason to shun those who do not care to take the time to understand what programming is all about.

      I’m not sure what you mean here. I’m a FOSS developer. I know what open source is. I also know what it takes to start with an existing open source app and mold it into a new shape, based on new requirements that I have. What am I missing?

      Linux is free FFS, install Ubuntu today and you have all the languages you’ll ever need. How is code vomit vibe coding helping? Also LLMs are very expensive to run right now, it’s the worst example.

I’m running an LLM and a transcription service (audio → text of my notes, synced via Syncthing from my mobile phone to a server, then processed using n8n and a Docker image of whisper-asr-webservice) on an NVIDIA 3080 GPU in my home, powered (mostly) by our solar panels. I’m exploring new paths, and vibecoding seems like an interesting one to me 🤷

      Last but not least, I hate how all the CEOs, managers, companies, and random people try to: pretend that open-source does not exist, change the meaning of the word open-source by associating it with binary blobs, and show developers as selfish people (“tech wizards”) who want to keep the technology for themselves.

      I’m not sure that I agree with this statement.

You don’t want to learn how computers work and that’s fine, it’s your right, but don’t pretend it’s anyone else’s fault.

      I guess I didn’t think I was blaming anyone here.

My vision for the future is a more equitable one, where digital algorithms don’t govern our lives like they do today (primarily at the hands of corporations). I’m exploring what vibecoding might mean if it emancipates people to contribute to the ruleset that is often hidden from their view, especially when they don’t have computer/technical expertise (but also simply by being human in this era, when mobile phones, social media, and unhealthy relationships with devices are ubiquitous and basically just “expected” of you).

  • @kittenroar@beehaw.org
    3 months ago

If you want to advance humanity through free/libre software, look at the FLOSS movement; that’s kinda their whole thing. Releasing a small piece of software on GitHub and providing some decent documentation for it is a nice thing to do.

Also, yeah, programming with an LLM can speed things up, but you have to know enough to recognize when the LLM is hampering you, and you have to just roll up your sleeves and code the damn thing yourself. They’re improving, but they are still kinda stupid, and they lie.