• tias@discuss.tchncs.de · 2 years ago

      If you can do this, do it. It’s a huge boost to performance thanks to infinitely lower latency.

      • jmcs@discuss.tchncs.de · 2 years ago

        And infinitely lower reliability, because you can’t have failovers (well, you can, but people who run everything on the same host won’t). It’s fine for something non-critical, but I wouldn’t do it with anything that pays the bills.

          • tias@discuss.tchncs.de · 2 years ago

          I work for a company that has operated like this for 20 years. The system goes down sometimes, but we can fix it in less than an hour. At worst the users get a longer coffee break.

          A single click in the software can often generate 500 SQL queries, so if you go from 0.05 ms to 1 ms latency you add half a second to clicks in the UI and that would piss our users off.
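          The arithmetic above can be checked in a couple of lines (the 500-query click and the two latency figures are from the comment; everything else is illustrative):

```python
# Per-click cost of per-query round-trip latency, assuming the
# queries run sequentially (no pipelining), as the comment describes.
queries_per_click = 500
local_ms = 0.05     # same-host latency
network_ms = 1.0    # separate-host latency

added_ms = queries_per_click * (network_ms - local_ms)
print(f"extra latency per click: {added_ms:.0f} ms")  # 475 ms
```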

          Definitely not saying this is the best way to operate at all times. But SQL has a huge problem with false dependencies between queries and APIs that make it very difficult to pipeline queries, so my experience has been that I/O-bound applications easily become extremely sensitive to latency.

            • Katana314@lemmy.world · 2 years ago

            I’m going to guess quite a few people here work at businesses where “sometimes breaks, but fixed in less than an hour” isn’t good enough for reliability.

              • trxxruraxvr@lemmy.world · 2 years ago

                Most businesses don’t require that kind of uptime, though. If I killed our servers for a couple of hours between 02:00 and 04:00 every night, probably nobody would notice for at least a year if it weren’t for the alerts we’d get.

  • esc27@lemmy.world · 2 years ago

    Random guess: a PHP error caused Apache to log a ridiculous number of errors to /var/log, and on this system that isn’t its own partition, so /var filled up, crashing MySQL. The user wiped /var/log to free up space.
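    A quick sketch of how you might hunt down which directory is eating the disk in a scenario like this (the /var/log scenario is from the comment; the helper functions here are illustrative, roughly a hand-rolled `du -s * | sort -rn`):

```python
import os

def dir_size(path):
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

def biggest_subdirs(path, top=5):
    """Immediate subdirectories of path, largest first."""
    sizes = [(dir_size(os.path.join(path, d)), d)
             for d in os.listdir(path)
             if os.path.isdir(os.path.join(path, d))]
    return sorted(sizes, reverse=True)[:top]

# e.g. biggest_subdirs("/var") would likely point straight at /var/log here
```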

    • harmsy@lemmy.world · 2 years ago

      That’s not far off from something that happened to me a few years ago. My computer suddenly started struggling one day, and I quickly figured out that my hard drive suddenly had 500 gigs or so of extra data somewhere. I had to find a tool that would let me see how much space a given folder was taking up, and eventually I found an absolutely HUMONGOUS error log file. After I cleared it out, the file rapidly filled up again when I used a program I’d been using all the time. I think it was Minecraft or something. Anyway, my duct tape solution was to just make that log file read-only, since the error in question didn’t actually affect anything else.
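      The read-only workaround amounts to a plain permission change (a sketch; the log path is hypothetical, and on Linux `chattr +i` would be a stronger version of the same idea):

```python
import os
import stat

def make_read_only(path):
    """Strip all write bits so the program can no longer append to the log."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# hypothetical path standing in for the offending log file:
# make_read_only("/path/to/huge-error.log")
```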

  • hakunawazo@lemmy.world · 2 years ago

    Like every time with natives, it was a race condition cascade of table locks followed by MySQL suicide, caused by bad cron job scripts implemented by the user.