I think it’s generally a brilliant solution but there are a couple of problems here:
The scanner seems to flag fucking everything and charge for minor damage that a human would probably write off as normal wear.
No one is allowed to correct the scanner:
Perturbed by the apparent mistake, the user tried to speak to employees and managers at the Hertz counter, but none were able to help, and all “pointed fingers at the ‘AI scanner.’” They were told to contact customer support — but even that proved futile after representatives claimed they “can’t do anything.”
Sounds to me like they’re just trying to replace those employees. That’s why they won’t let them interfere.
Sounds like they want to lose those customers.
But they know their competitors are going to adopt the same type of tech, so where are those customers going to go when they have no choice?
I use an app called GoMore in some places in Europe that lets you rent cars directly from other people. The rental process is cheaper and faster (everything is done through the app) and you avoid these shady corpo practices.
For now, until the same shit that happened with Airbnb happens there, with corporations just renting out all the cars.
I’m not sure how you can make the points you make, and still call it a “generally brilliant solution”
The entire point of this system - like anything a giant company like Hertz does - is not to be fair to the customer. The point is to screw the customer over to make money.
Not allowing human employees to challenge the incorrect AI decision is very intentional, because it defers your complaint to a later time when you have to phone customer support.
This means you no longer have the persuasive power of being there in person at the time of the assessment, with the car still there too, and you have to muster the time and effort to call customer services, which they are hoping you won't bother doing. Even if you do call, CS hold all the cards at that point and can easily swerve you over the phone.
It’s all part of the business strategy.
Because the technology itself is not the problem, it’s the application. Not complicated.
The technology is literally the problem as it’s not working
There’s literally nothing wrong with the technology. The problem is the application.
The technology is NOT DOING WHAT IT'S MEANT TO DO - it is IDENTIFYING DAMAGE WHERE THERE IS NONE - the TECHNOLOGY is NOT working as it should.
Do you hold everything to such a standard?
Stop lights are meant to direct traffic. If someone runs a red light, is the technology not working as it should?
The technology here, using computer vision to automatically flag potential damage, needed to be implemented alongside human supervision - an employee should be able to walk by the car, see that the flagged damage doesn't actually exist, and override the algorithm (rough sketch of that kind of override gate below).
The technology itself isn't bad, it's how Hertz is using it that is.
I believe the unfortunate miscommunication here is that when @Ulrich@feddit.org said the solution was brilliant, they were referring to the technology as the “solution”, and others are referring to the implementation as a whole as the “solution”
The stop light analogy would require the stop light itself to be doing something wrong, not the human element, because there is no human element to this implementation; it is the technology itself malfunctioning. There was no damage, but the system thinks there is damage.
Yes, that’s exactly what I’m saying. That’s the problem with the implementation.