• @WoodScientist@lemmy.world
    61 points · 4 months ago

    People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren’t Asimov’s robots we’re dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing about self-driving cars and the trolley problem: “Should a car veer into oncoming traffic to avoid hitting a child crossing the road?” These writers imagine that the creators of these machines hand-code every scenario, like a long series of if statements.

    But that’s just not how these things are made. They are not programmed; they are trained. In the case of self-driving cars, the system is fed a large amount of video footage and radar records, along with the driver inputs that accompanied those conditions, and the model is trained to map the camera and radar inputs to whatever the human drivers did.
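
    A rough sketch of what that kind of training loop looks like (generic imitation learning in PyTorch; the model, data, and names here are illustrative, not anything Waymo has published):

        import torch
        from torch import nn

        def train_driving_policy(model: nn.Module, driving_logs, epochs: int = 10):
            """Fit a model to imitate logged human driving (behavior cloning)."""
            optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
            loss_fn = nn.MSELoss()  # penalize deviation from what the human driver did
            for _ in range(epochs):
                # each batch pairs sensor data (camera/radar features) with the
                # steering/brake/throttle the human driver actually applied
                for sensor_frames, human_controls in driving_logs:
                    predicted_controls = model(sensor_frames)
                    loss = loss_fn(predicted_controls, human_controls)
                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()
            return model

    Nothing in that loop mentions crosswalks; the model just learns whatever the drivers in the logs actually did.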

    This behavior isn’t at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained off us, off our responses, and they exhibit all of the mistakes and errors we make. The reason Waymo cars don’t stop at crosswalks is that human drivers don’t stop at crosswalks. The machine is simply copying us.

    • @SkybreakerEngineer@lemmy.world
      55 points · 4 months ago

      All of which takes you back to the headline, “Waymo trains its cars to not stop at crosswalks.” The company controls the input; it needs to be responsible for the results.

      • @FireRetardant@lemmy.world
        26 points · 4 months ago

        Some of these self-driving car companies have successfully lobbied to stop cities from ticketing their vehicles for traffic infractions. Here they are stating these cars are so much better than human drivers, yet they won’t stand behind that statement; instead, they demand special rules for themselves and no consequences.

    • snooggums
      27 points · 4 months ago

      The machine can still be trained to actually stop at crosswalks, the same way it is trained not to collide with other cars even though people do.

    • @tiramichu@lemm.ee
      15 points · 4 months ago (edited)

      I think the reason non-tech people find this so difficult to comprehend is a poor understanding of which problems are easy for (classically programmed) computers to solve and which are hard.

      if ( person_at_crossing ) then { stop }
      

      To the layperson it makes sense that self-driving cars should be programmed this way. After all, this is a trivial problem for a human to solve: just look, and if there is a person, you stop. Easy peasy.

      But for a computer, how do you know? What is a ‘person’? What is a ‘crossing’? How do we know if the person is ‘at/on’ the crossing as opposed to simply near it or passing by?

      To me it’s this disconnect between the common understanding of computer capability and the reality that causes the misconception.
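
      In practice, the “person at crossing” check is itself the output of a perception model, and it comes back as confidence scores rather than a clean boolean. A toy sketch of what the computer actually has to work with (the labels, fields and thresholds are invented for illustration):

          # The detector never answers "yes, a person is at the crossing"; it returns
          # candidate objects with confidences, and "person", "crossing" and "at"
          # all have to be defined by whoever picks the thresholds.
          detections = [
              {"label": "pedestrian", "confidence": 0.87, "distance_to_crosswalk_m": 0.4},
              {"label": "bicycle",    "confidence": 0.35, "distance_to_crosswalk_m": 6.0},
          ]

          CONFIDENCE_THRESHOLD = 0.8  # how sure is "sure"?
          AT_CROSSING_MAX_M = 1.0     # how close counts as "at" the crossing?

          person_at_crossing = any(
              d["label"] == "pedestrian"
              and d["confidence"] >= CONFIDENCE_THRESHOLD
              and d["distance_to_crosswalk_m"] <= AT_CROSSING_MAX_M
              for d in detections
          )

          if person_at_crossing:
              print("stop")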

      • @Starbuck@lemmy.world
        8 points · 4 months ago

        I think you could liken it to training a young driver who doesn’t share a language with you. You can demonstrate the behavior you want once or twice, but unless every observation they learn from demonstrates that behavior, you can’t say “yes, we specifically told it to do that.”

      • @AA5B@lemmy.world
        3 points · 4 months ago

        You can use that logic to say it would be difficult to do the right thing for all cases, but we can start with the ideal case.

        • For a clearly marked crosswalk with a pedestrian in the street, stop.
        • For a pedestrian in the street, stop.
      • Iceblade
        3 points · 4 months ago

        The difference is that humans (usually) come with empathy (or at least self-preservation) built in. With self-driving cars we aren’t building in empathy and self- (or at least passenger) preservation; we’re hard-coding in scenarios where the law says they have to do X or Y.

      • snooggums
        3 points · 4 months ago

        But for a computer, how do you know? What is a ‘person’? What is a ‘crossing’? How do we know if the person is ‘at/on’ the crossing as opposed to simply near it or passing by?

        Most crosswalks are marked. The vehicle is able to identify obstructions in the road, as well as things beside the road that are moving toward it, the same way it identifies cross-street traffic.

        If (thing) is crossing the street, then stop. If (thing) is stationary near a marked crosswalk, stop, and go if they don’t move within a reasonable amount of time, say (x) seconds.

        You know, the same way people are supposed to handle the same situation.
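
        As a sketch (assuming the perception layer already reports what it sees and how long it has been stationary; the field names and timing are invented for illustration), that rule might look like:

            WAIT_SECONDS = 5  # the "(x) seconds" above; the actual value is a policy choice

            def crosswalk_decision(obj):
                """Return 'stop' or 'go' for one detected object near a crosswalk."""
                if obj["crossing_street"]:
                    return "stop"
                if obj["near_marked_crosswalk"]:
                    # stop first, then proceed only once the object has stayed put long enough
                    if obj["stationary_seconds"] < WAIT_SECONDS:
                        return "stop"
                    return "go"
                return "go"

            # Example: someone standing at the paint line for 2 seconds -> "stop"
            print(crosswalk_decision({
                "crossing_street": False,
                "near_marked_crosswalk": True,
                "stationary_seconds": 2,
            }))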

        • hissing meerkat
          9 points · 4 months ago

          Most crosswalks in the US are not marked, and in all places I’m familiar with, vehicles must stop or yield to pedestrians at unmarked crosswalks.

          At unmarked crosswalks, and at marked but uncontrolled crosswalks, we have to handle the situation with social cues: which direction the pedestrian wants to cross the street/road/highway, and whether they will feel safer crossing after a vehicle has passed than before (almost always the case for homeless pedestrians, and frequently for pedestrians in moderate traffic).

          If Waymo can’t figure out whether something intends to, or is likely to, enter the highway, it can’t drive a car. That could be people at crosswalks, people crossing at places other than crosswalks, blind pedestrians crossing anywhere, deaf and blind pedestrians crossing even at controlled intersections, kids or wildlife or livestock running toward the road, etc.

    • @tibi@lemmy.world
      4 points · 4 months ago

      Training self-driving cars that way would be irresponsible, because the car would behave unpredictably and could be really dangerous. In reality, self-driving cars use AI only for the tasks it is really good at, like object recognition (e.g. recognizing traffic signs, pedestrians and other vehicles). The car uses all this data to build a map of its surroundings and tries to predict what the other participants are going to do. Then it decides whether it’s safe to move the vehicle, and the path it should take. All of these things can be done algorithmically; AI is only necessary for the object recognition.
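
      A highly simplified sketch of that kind of pipeline (the class and function names are illustrative stand-ins, not Waymo’s actual architecture): the learned part is object recognition, and the decision downstream is ordinary logic.

          from dataclasses import dataclass

          @dataclass
          class DetectedObject:
              kind: str                # "pedestrian", "vehicle", "traffic_sign", ...
              predicted_in_path: bool  # prediction: will it intersect our planned path?

          def detect_objects(sensor_frame):
              """Stand-in for the learned perception model (the part that needs AI)."""
              return [DetectedObject(kind="pedestrian", predicted_in_path=True)]

          def plan(objects):
              """Stand-in for the algorithmic part: decide whether it is safe to move."""
              if any(obj.predicted_in_path for obj in objects):
                  return "brake"
              return "follow planned path"

          print(plan(detect_objects(sensor_frame=None)))  # -> "brake"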

      In cases such as this, just follow the money to find the incentives. Waymo wants to maximize their profits. This means maximizing how many customers they can serve as well as minimizing driving time to save on gas. How do you do that? Program their cars to be a bit more aggressive: don’t stop on yellow, don’t stop at crosswalks except to avoid a collision, drive slightly over the speed limit. And of course, lobby the shit out of every politician to pass laws allowing them to get away with breaking these rules.

      • @svtdragon@lemmy.world
        1 point · 4 months ago

        According to some cursory research (read: Google), obstacle avoidance uses ML to identify objects, and uses those identities to predict their behavior. That stage leaves room for the same unpredictability, doesn’t it? Say you only have 51% confidence that a “thing” is a pedestrian walking a bike, 49% that it’s a bike on the move. The former has right of way and the latter doesn’t. Or even 70/30. 90/10.

        There’s some level where you have to set the confidence threshold to choose a course of action and you’ll be subject to some ML-derived unpredictability as confidence fluctuates around it… right?
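
        Concretely, the worry looks something like this (a toy sketch with made-up numbers and class names):

            RIGHT_OF_WAY = {"pedestrian_walking_bike": True, "cyclist_riding": False}

            def yield_decision(confidences):
                """Act on whichever identity the model is (even barely) more confident in."""
                label = max(confidences, key=confidences.get)
                return "yield" if RIGHT_OF_WAY[label] else "proceed"

            # A two-point swing in confidence flips the car's behavior entirely.
            print(yield_decision({"pedestrian_walking_bike": 0.51, "cyclist_riding": 0.49}))  # yield
            print(yield_decision({"pedestrian_walking_bike": 0.49, "cyclist_riding": 0.51}))  # proceed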

        • @tibi@lemmy.world
          1 point · 4 months ago

          In such situations, the car should take the safest action and assume it’s a pedestrian.

          • @svtdragon@lemmy.world
            1 point · 4 months ago

            But mechanically that’s just moving the confidence threshold to 100%, which is not achievable as far as I can tell. It quickly reduces to “all objects are pedestrians,” which halts traffic.

            • @tibi@lemmy.world
              1 point · 4 months ago (edited)

              This would only happen in ambiguous situations, when the confidence levels for “pedestrian” and “cyclist” are close to each other. If there’s an object with a 20% confidence of being a pedestrian, it probably isn’t one. But we’re talking about deciding whether to yield or not, which isn’t really safety-critical.

              The car should avoid collisions with any object, whether it’s a pedestrian, cyclist, cat, box, fallen tree or anything else, moving or not.
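
              Put differently, that amounts to a margin rule rather than a 100% threshold (again a toy sketch with invented numbers; collision avoidance stays a separate check that ignores the label entirely):

                  AMBIGUITY_MARGIN = 0.15  # how close two labels must be to count as ambiguous

                  def classify_for_yielding(confidences):
                      """Treat near-ties involving 'pedestrian' as 'pedestrian'; otherwise take the top label."""
                      ranked = sorted(confidences, key=confidences.get, reverse=True)
                      top, runner_up = ranked[0], ranked[1]
                      if ("pedestrian" in (top, runner_up)
                              and confidences[top] - confidences[runner_up] < AMBIGUITY_MARGIN):
                          return "pedestrian"  # the safest assumption when it's basically a coin flip
                      return top

                  print(classify_for_yielding({"pedestrian": 0.51, "cyclist": 0.49}))  # -> pedestrian
                  print(classify_for_yielding({"pedestrian": 0.20, "cyclist": 0.80}))  # -> cyclist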