I’m curious how software can be created and evolve over time. I’m afraid that at some point, we’ll realize there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.

Are there any instances of this happening? Where something is designed with a flaw that doesn’t get realized until much later, necessitating scrapping the whole thing and starting from scratch?

  • @biribiri11@lemmy.ml
    36 · 1 year ago

    The entire thing. It needs to be completely rewritten in rust, complete with unit tests and Miri in CI, and converted to a high performance microkernel. Everything evolves into a crab /s

  • @nycki@lemmy.world
    36 · 1 year ago (edited)

    Starting anything from scratch is a huge risk these days. At best you’ll have something like the Python 2 -> 3 overhaul (leaving scraps of legacy code all over the place); at worst you’ll have something like GNOME/KDE (where the community schisms rather than adopting a new standard). I would say that most of the time, there are only two ways to get a new standard to reach mass adoption.

    1. Retrofit everything. Extend old APIs where possible. Build your new layer on top of HTTPS, or JavaScript, or ASCII, or something else that already has widespread adoption. Make a clear upgrade path for old users, but maintain compatibility for as long as possible.

    2. Buy 99% of the market and declare yourself king (cough cough chromium).

      • @flying_sheep@lemmy.ml
        3 · 1 year ago

        In a good way. Using a non-verified bytes type for strings was such a giant source of bugs. Text is complicated and pretending it isn’t won’t get you far.

  • Maybe not exactly Linux, sorry for that, but it was the first thing that came to mind.
    Web browsers really should be rewritten to be more modular and easier to modify. The web was supposed to be bulletproof and keep working even when some features are missing, but websites are now built on the assumption that every browser implements 99% of Chromium’s features, so they won’t work in any browser written from scratch today.

    • @intrepid@lemmy.ca
      25 · 1 year ago

      The same guys who create Chrome have stuffed the web standards with needlessly bloated fluff that makes it nearly impossible for anyone else to implement them. If alternative browsers are to be a thing again, we need a new standard, or at least the current standard with significant portions removed.

    • @MonkderDritte@feddit.de
      4 · 1 year ago

      Agreed. I mean, metadata should be protocol stuff, not document stuff. And rendering (font size, etc.) should be user-side, not developer-side. Browsers should be modular, not monoliths. Creating a webpage should be easy again.

  • @limelight79@lemm.ee
    28 · 1 year ago

    We haven’t rewritten the firewall code lately, right? checks Oh, it looks like we have. Now it’s nftables.

    I learned ipfirewall, then ipchains, then iptables came along, and I was like, oh hell no, not again. At that point I found software to set up the firewall for me.
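
    For a taste of the churn, here are roughly equivalent rules in the two newest generations (the port and table names are illustrative, and both need root to apply):

    ```shell
    # iptables: accept inbound SSH
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT

    # nftables: same intent, with one unified "inet" family covering IPv4+IPv6
    nft add table inet filter
    nft add chain inet filter input '{ type filter hook input priority 0; }'
    nft add rule inet filter input tcp dport 22 accept
    ```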

    • @catloaf@lemm.ee
      8 · 1 year ago

      Damn, you’re old. iptables came out in 1998. That’s the one I learned (and I still don’t fully understand it).

    • TXL
      4 · 1 year ago

      I was just thinking that iptables lasted a good 20 years. Over twice that of ipchains. Was it good enough or did it just have too much inertia?

      Nf is probably a welcome improvement in any case.

  • @gnuhaut@lemmy.ml
    22 · 1 year ago

    GUI toolkits like Qt and Gtk. I can’t tell you how to do it better, but something is definitely wrong with the standard class hierarchy framework model these things adhere to. Someday someone will figure out a better way to write GUIs (or maybe that already exists and I’m unaware) and that new approach will take over eventually, and all the GUI toolkits will have to be scrapped or rewritten completely.

    • Lung
      11 · 1 year ago

      Idk man, I’ve used a lot of UI toolkits, and I don’t really see anything wrong with GTK (though they do basically rewrite it from scratch every few years it seems…)

      The only thing that comes to mind is the React-ish world of UI systems, where model-view-controller patterns are more obvious to use. I.e. a concept of state where the UI automatically re-renders based on the data backing it

      But generally, GTK is a joy, and imo the world of HTML has long been trying to catch up to it. It’s only kinda recently that we got flexbox, and that was always how GTK layouts were. The tooling, design guidelines, and visual editors have been great for a long time

    • TXL
      5 · 1 year ago

      Newer toolkits all seem to be going immediate mode. Which I kind of hate as an idea personally.

      • Joe Breuer
        18 · 1 year ago

        Which - in my considered opinion - makes them so much worse.

        Is it because writing native UI on all current systems I’m aware of is still worse than in the times of NeXTStep with Interface Builder, Objective C, and their class libraries?

        And/or is it because it allows (perceived) lower-cost “web developers” to be tasked with “native” client UI?

    • @MonkderDritte@feddit.de
      2 · 1 year ago

      and all the GUI toolkits will have to be scrapped or rewritten completely

      Dillo is the only tool I know still using FLTK.

  • @Hector@lemmy.ca
    21 · 1 year ago

    Some form of stable, modernized bluetooth stack would be nice. Every other bluetooth update breaks at least one of my devices.

  • @MrAlternateTape@lemm.ee
    21 · 1 year ago

    It’s actually a classic programmer move to start over again. I’ve read the book “Clean Code” and it talks about this a bit.

    Apparently it would not be the first time that the fresh start turns into the same mess as the old codebase it’s supposed to replace. While starting over can be tempting, refactoring is in my opinion better.

    If you refactor a lot, you start thinking the same way about the new code you write. So any new code you write will probably be better and you’ll be cleaning up the old code too. If you know you have to clean up the mess anyways, better do it right the first time …

    However it is not hard to imagine that some programming languages simply get too old and the application has to be rewritten in a new language to ensure continuity. So I think that happens sometimes.

    • @teawrecks@sopuli.xyz
      11 · 1 year ago

      Yeah, this was something I recognized about myself in the first few years out of school. My brain always wanted to say “all of this is a mess, let’s just delete it all and start from scratch” as though that was some kind of bold/smart move.

      But I now understand that it’s the mark of a talented engineer to see where we are as point A, where we want to be as point B, and be able to navigate from A to B before some deadline (and maybe you have points/deadlines C, D, E, etc.). The person who has that vision is who you want in charge.

      Chesterton’s Fence is the relevant analogy: “you should never destroy a fence until you understand why it’s there in the first place.”

      • @sepulcher@lemmy.ca (OP)
        2 · 1 year ago

        “you should never destroy a fence until you understand why it’s there in the first place.”

        I like that; it really makes me think about my time in building games.

  • @MonkderDritte@feddit.de
    21 · 1 year ago (edited)

    Alsa > Pulseaudio > Pipewire

    About 20 xdg-open alternatives (xdg-open itself is, btw, just a wrapper around gnome-open, exo-open, etc.)

    My session scripts after a deep dive. Seriously, startxfce4 has workarounds from the ’80s, and software rot has already affected the formatting.

    Turnstile instead of elogind (which is bound to systemd releases)

    mingetty, because who uses a modem nowadays?

  • @taladar@sh.itjust.works
    11 · 1 year ago

    I would say the whole set of C-based assumptions underlying most modern software, specifically errors being just an integer constant that gets translated into text, so it carries no details about the attempted operation (who tried to do what to which object, and why it failed).

    • You have stderr to throw errors into. And the constants are just error codes, like HTTP status codes. Without them, how would the computer know whether the program executed correctly?

      • @atzanteol@sh.itjust.works
        0 · 1 year ago

        You throw an exception like a gentleman. But C doesn’t support them. So you need to abuse the return type to also indicate “success” as well as a potential value the caller wanted.
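
        A minimal C sketch of that convention (the function and its parsing logic are hypothetical, just for illustration): the int return carries the status, and the value the caller actually wanted comes back through an out-pointer.

        ```c
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Classic C convention: return an error code, pass the real
         * result back through a pointer. */
        static int parse_positive(const char *s, long *out)
        {
            char *end;
            errno = 0;
            long v = strtol(s, &end, 10);
            if (errno != 0 || end == s || *end != '\0')
                return EINVAL;   /* the return type doubles as a status channel */
            if (v <= 0)
                return ERANGE;
            *out = v;            /* the value the caller actually wanted */
            return 0;            /* 0 conventionally means success */
        }

        int main(void)
        {
            long v;
            int err = parse_positive("42", &v);
            if (err != 0)
                fprintf(stderr, "parse failed: %s\n", strerror(err));
            else
                printf("parsed %ld\n", v);  /* prints "parsed 42" */
            return 0;
        }
        ```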

        • @uis@lemm.ee
          3 · 1 year ago (edited)

          So you need to abuse the return type to also indicate “success” as well as a potential value the caller wanted.

          You don’t need to.

          Returning structs, returning by pointer, signals, error flags, setjmp/longjmp, using __cxa for exceptions (lol, now THIS is real abuse).
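
          A sketch of the first alternative on that list, returning a struct so the status and the value travel together (the names here are made up for illustration):

          ```c
          #include <stdbool.h>
          #include <stdio.h>

          /* Status and value in one struct, instead of overloading a
           * single int return. */
          struct div_result {
              bool ok;
              long value;
          };

          static struct div_result checked_div(long a, long b)
          {
              if (b == 0)
                  return (struct div_result){ .ok = false, .value = 0 };
              return (struct div_result){ .ok = true, .value = a / b };
          }

          int main(void)
          {
              struct div_result r = checked_div(10, 2);
              if (r.ok)
                  printf("10 / 2 = %ld\n", r.value);  /* prints "10 / 2 = 5" */

              r = checked_div(1, 0);
              if (!r.ok)
                  fprintf(stderr, "division by zero\n");
              return 0;
          }
          ```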

    • @teawrecks@sopuli.xyz
      5 · 1 year ago

      You mean 0 indicating success and any other value indicating some arbitrary meaning? I don’t see any problem with that.

      Passing around extra error handling info for the worst case isn’t free, and the worst case doesn’t happen 99.999% of the time. No reason to spend extra cycles and memory hurting performance just to make debugging easier. That’s what debug/instrumented builds are for.

      • @atzanteol@sh.itjust.works
        2 · 1 year ago

        Ugh, I do not miss C…

        Errors and return values are, and should be, different things. Almost every other language figured this out and handles it better than C.

        • @teawrecks@sopuli.xyz
          3 · 1 year ago

          It’s more of an ABI thing though, C just doesn’t have error handling.

          And if you do exception handling wrong in most other languages, you hamstring your performance.

        • @uis@lemm.ee
          0 · 1 year ago

          Errors and return values are, and should be, different things.

          That’s why errno and return value are different things.
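
          A small illustration of that split, using fopen (the path is made up and assumed not to exist): the return value says *that* the call failed, while errno says *why*.

          ```c
          #include <errno.h>
          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
              /* The return value (NULL) and the error detail (errno)
               * are two separate channels. */
              FILE *f = fopen("/no/such/file", "r");
              if (f == NULL) {
                  fprintf(stderr, "fopen: %s (errno=%d)\n",
                          strerror(errno), errno);
                  return 1;
              }
              fclose(f);
              return 0;
          }
          ```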

      • @taladar@sh.itjust.works
        2 · 1 year ago

        Passing around extra error handling info for the worst case isn’t free, and the worst case doesn’t happen 99.999% of the time.

        The case “I want to know why this error happened” is basically 100% of the time when an error actually happens.

        And the case of “Permission denied” or similar useless nonsense without any detail happens quite regularly, costing me hours of debugging time that wouldn’t be necessary if it just told me who was denied permission to do what to which object.

        • @teawrecks@sopuli.xyz
          -1 · 1 year ago

          “0.001% of the time, I wanna know every time 👉😎👉”

          Yeah, I get that. But are we talking about during development (which is why we’re choosing between C and something else)? In that case, you should be running instrumented builds, or with debug functionality enabled. I agree that most programs just fail and don’t tell you how to go about enabling debug info or anything, and that could be improved.

          For the “Permission Denied” example, I also assume we’re making system calls and having them fail? In that case it seems straightforward: the user you’re running as can’t access the resource you were actively trying to access. But if we’re talking about some random log file just saying “Error: permission denied” and leaving you nothing to go on, that’s on the program dumping the error; it should produce more useful information.

          In general, you often don’t want to leak more info than just Worked or Didn’t Work for security reasons. Or a mix of security/performance reasons (possible DOS attacks).

          • @taladar@sh.itjust.works
            0 · 1 year ago

            During development is just about the only time when that doesn’t matter because you have direct access to the source code to figure out which function failed exactly. As a sysadmin I don’t have the luxury of reproducing every issue with a debug build with some debugger running and/or print statements added to figure out where exactly that value originally came from. I really need to know why it failed the first time around.

            • @teawrecks@sopuli.xyz
              1 · 1 year ago

              Yeah, so it sounds like your complaint is actually with application not propagating relevant error handling information to where it’s most convenient for you to read it. Linux is not at fault in your example, because as you said, it returns all the information needed to fix the issue to the one who developed the code, and then they just dropped the ball.

              Maybe there’s a flag you can set to dump those kinds of errors to a log? But even then, some apps use the fail case as part of normal operation (try to open a file, if we can’t, do this other thing). You wouldn’t actually want to know about every single failure, just the ones that the application considers fatal.

              As long as you’re running on a Turing-complete machine, it’s on the app itself to sufficiently document what qualifies as an error and why it happened.
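
              A sketch of that “failure as normal flow” pattern (the path and setting are made up for illustration): a missing config file is not an error, just a reason to use defaults, so logging it would be noise.

              ```c
              #include <stdio.h>

              /* A missing config file is the expected case here, not a
               * failure worth reporting. */
              static int load_timeout(const char *path)
              {
                  int timeout = 30;       /* default */
                  FILE *f = fopen(path, "r");
                  if (f == NULL)
                      return timeout;     /* expected: fall back silently */
                  if (fscanf(f, "%d", &timeout) != 1)
                      timeout = 30;       /* unparsable file: fall back too */
                  fclose(f);
                  return timeout;
              }

              int main(void)
              {
                  /* prints "timeout = 30" when the file doesn't exist */
                  printf("timeout = %d\n", load_timeout("/no/such/app.conf"));
                  return 0;
              }
              ```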

              • @taladar@sh.itjust.works
                1 · 1 year ago

                The whole point of my complaint is that shitty C conventions produce shitty error messages. If I could rely on the programmer to work around those stupid conventions every time by actually checking the error and then enriching it with all relevant information I would have no complaints.

              • @taladar@sh.itjust.works
                0 · 1 year ago

                I know about strace, strace still requires me to reproduce the issue and then to look at backtraces if nobody bothered to include any detail in the error.

                • @uis@lemm.ee
                  0 · 1 year ago

                  So somehow a lack of backtraces and details in errors is a “C-based assumption”?

      • @taladar@sh.itjust.works
        1 · 1 year ago

        It very much does have the concept of objects, as in the subject, verb, and object of operations implemented in assembly.

        As in who (user foo) tried to do what (open/read/write/delete/…) to which object (e.g. which socket, which file, which Linux namespace, which memory mapping,…).

        • @uis@lemm.ee
          1 · 1 year ago (edited)

          implemented in assembly.

          Indeed. Assembly is (or can be) used to implement them.

          As in who (user foo) tried to do what (open/read/write/delete/…) to which object (e.g. which socket, which file, which Linux namespace, which memory mapping,…).

          The kernel implements these in software (except memory mappings, which are implemented in the MMU). There are no sockets, files, or namespaces in the ISA.

          • @taladar@sh.itjust.works
            1 · 1 year ago

            You were the one who brought up assembly.

            And stop acting like you don’t know what I am talking about. Syscalls implement operations that are called by someone who has certain permissions and operate on various kinds of objects. Nobody who wants to debug why that call returned “Permission denied” or “File does not exist” without any detail cares that there is hardware several layers of abstraction deeper down that doesn’t know anything about those concepts. Nothing in the hardware forces people to make APIs with bad error reporting.

              • @taladar@sh.itjust.works
                1 · 1 year ago

                Because if a program dies and just prints strerror(errno) it just gives me “Permission denied” without any detail on which operation had permissions denied to do what. So basically I have not enough information to fix the issue or in many cases even to reproduce it.
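
                As a sketch of the difference (the path and the enriched message format are hypothetical): the same failed open(), reported the bare-strerror way versus with subject, verb, and object attached.

                ```c
                #include <errno.h>
                #include <fcntl.h>
                #include <stdio.h>
                #include <string.h>
                #include <unistd.h>

                int main(void)
                {
                    const char *path = "/no/such/dir/app.conf";  /* made-up path */
                    int fd = open(path, O_RDONLY);
                    if (fd < 0) {
                        /* Bare C convention: no subject, verb, or object */
                        fprintf(stderr, "%s\n", strerror(errno));

                        /* Enriched: who tried to do what to which object */
                        fprintf(stderr, "uid %d: open(\"%s\", O_RDONLY) failed: %s\n",
                                (int)getuid(), path, strerror(errno));
                        return 1;
                    }
                    close(fd);
                    return 0;
                }
                ```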

                • @uis@lemm.ee
                  0 · 1 year ago (edited)

                  It may just not print anything at all. This is a logging issue, not a “C-based assumption”. I wouldn’t be surprised if you called “403 Forbidden” a “C-based assumption” too.

                  But since we are talking about a local program, a competent sysadmin can strace it. That will print the syscall arguments and error codes.

    • KryptonBlur
      4 · 1 year ago

      What are the advantages of Zig? I’ve seen lots of people talking about it, but I’m not sure I understand what it supposedly does better.

  • @mlg@lemmy.world
    10 · 1 year ago

    Not too relevant for desktop users, but NFS.

    No way people are actually setting it up with Kerberos Auth

  • @Hawke@lemmy.world
    8 · 1 year ago (edited)

    There are many instances like that: systemd vs. System V init, X vs. Wayland, ed vs. vim, TeX vs. LaTeX vs. LyX vs. ConTeXt, OpenOffice vs. LibreOffice.

    Usually someone identifies a problem or a new way of doing things… then a lot of people adopt it and some people don’t. Sometimes the new improvement is worse, sometimes it inspires a revival of the old system for the better…

    It’s almost never catastrophic for anyone involved.