Crowd-sourced Autonomous Vehicle Training

Definitions: When driving, human drivers encounter non-events (expected occurrences, such as a traffic light turning red) as well as events (unexpected occurrences, such as an unaccompanied child standing at the edge of the road).

Events are captured from crowd-sourced participants driving their own cars during their routine lives. The determination that a particular scenario is an unexpected event is made automatically (without manual input) by sensing eye, foot, and hand positions and movements, and comparing them to a map. This arrangement not only senses the driver's actions to change speed or direction in response to such unexpected events, it can also detect the driver's intention to do so. It can likewise determine various driving attributes and mental characteristics of the driver. This in turn allows drivers to be scored and ranked in each geographical region, so that expert drivers can be identified.
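As a rough illustration of this automatic determination, the sketch below (in Python, with entirely hypothetical names, thresholds, and signal layouts; not the claimed implementation) flags a scenario as an unexpected event when a strong driver reaction has no explanation in the map:

    from dataclasses import dataclass

    @dataclass
    class DriverState:
        gaze_shift_deg: float       # magnitude of a recent eye movement
        brake_pedal_travel: float   # 0.0 (released) .. 1.0 (fully pressed)
        steering_grip_force: float  # normalized grip force on the wheel

    @dataclass
    class MapContext:
        expects_stop: bool          # e.g. traffic light, stop sign, known crossing
        expects_turn: bool          # e.g. mapped curve or junction

    # An event is flagged when a strong driver reaction has no mapped explanation.
    def is_unexpected_event(driver: DriverState, ctx: MapContext) -> bool:
        strong_reaction = (
            driver.gaze_shift_deg > 20.0
            or driver.brake_pedal_travel > 0.5
            or driver.steering_grip_force > 0.7
        )
        return strong_reaction and not (ctx.expects_stop or ctx.expects_turn)

    # Hard braking with no mapped stop nearby is treated as an unexpected event.
    print(is_unexpected_event(
        DriverState(gaze_shift_deg=35.0, brake_pedal_travel=0.8, steering_grip_force=0.9),
        MapContext(expects_stop=False, expects_turn=False),
    ))  # True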

Signatures for events extracted from the driving patterns of such expert drivers are more reliable and form better training routines for improving autonomous vehicle software. A database of thousands of such signatures, and their variations, is built from these crowd-sourced expert drivers. The signatures are then used by autonomous vehicles to identify potential or actual events and react to them in a human-like manner.
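A minimal sketch of how such a signature database might be queried at runtime follows; the feature layout, the example signatures, and the distance threshold are assumptions for illustration only:

    import math

    # Each signature pairs a label with a feature vector that might encode gaze
    # shift, brake timing, deceleration profile, etc. Values are made up.
    SIGNATURES = [
        ("child_near_road_edge", [0.9, 0.8, 0.7]),
        ("ambulance_siren",      [0.2, 0.9, 0.4]),
        ("routine_red_light",    [0.1, 0.3, 0.2]),
    ]

    # Return the closest known signature, or None if nothing is close enough.
    def match_event(features, threshold=0.35):
        best_label, best_dist = None, float("inf")
        for label, sig in SIGNATURES:
            dist = math.dist(features, sig)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label if best_dist <= threshold else None

    print(match_event([0.85, 0.75, 0.72]))  # "child_near_road_edge"
    print(match_event([0.0, 0.0, 0.0]))     # None: no known event recognized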

Companies developing AVs need not wait for a history of “millions of miles tested”; instead, they can rely on extremely quick, cheap, and highly reliable human-like training sub-routines extracted from crowd-sourced drivers.

This scheme allows millions of volunteer drivers to transform their routine driving into event-capturing runs, with no training or expensive equipment required.

• Method 1: Crowd-sourced volunteer participants use their own cars to acquire thousands of scenarios very quickly. Initial screening of potential participants is done using an online virtual-vehicle test coupled with the participant’s smartphone to capture eye movements. Entry-level participants are not required to modify their vehicles or deploy additional sensors (other than their own smartphones). Participants are promoted to the next level as data collection progresses. Expert drivers are identified and provided with additional sensors (total cost around $100-$500 per car), including road- and driver-facing cameras, a hand contact area and force sensor pad (wrapped around the steering wheel), and position sensors on the brake and accelerator pedals. A sketch of this screening and promotion funnel follows the list below.
• Method 2: Same as Method 1 above, but using virtual vehicles. The resulting data is far less reliable and of lower quality, but it can be useful for evaluation or testing.
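The following sketch illustrates the screening and promotion funnel of Method 1 under assumed thresholds and field names (none of these values come from the application itself):

    # Entry gate from Method 1: an online virtual-vehicle test plus eye-movement
    # capture through the participant's own smartphone. The pass mark is assumed.
    def screening_passed(virtual_test_score: float, eye_tracking_ok: bool) -> bool:
        return virtual_test_score >= 0.7 and eye_tracking_ok

    # Promotion rule: move a participant up a level once enough routine driving
    # data has accumulated; level 1 drivers receive the extra sensor kit
    # (road- and driver-facing cameras, steering grip pad, pedal sensors).
    def next_level(level: int, miles_logged: float, event_captures: int) -> int:
        if level == 0 and miles_logged >= 500 and event_captures >= 10:
            return 1
        return level

    print(screening_passed(0.82, eye_tracking_ok=True))        # True: accepted
    print(next_level(0, miles_logged=620, event_captures=14))  # 1: sensor kit issued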

PCT Application: WO 2019095013

There are multiple allowed inventions (ISR/ISO by AU: Jan’19, Demand: Feb’19, clear IPER: May’19):

1. Independent: Using eye movements of drivers to train autonomous vehicles (broad claim).
Dependent:
i. coupled with data related to hand grip and contact area on steering wheel.
ii. coupled with distances of accelerator and brake pedals from the wall behind them as well as from the driver’s foot.
iii. coupled with the use of aural data (sounds that the driver hears, like ambulance sirens).

2. Independent: Obtaining signatures of events outside vehicles by using eye and foot information, and using the signatures to train autonomous vehicles.
Dependent:
i. coupled with data on hand grip and contact area on steering wheel.
ii. coupled with the use of aural data.

3. Independent: Determining if an event outside a vehicle has occurred by using eye, foot and hand sensor information and comparing them to a map.

4. Independent: Determining if an event outside a vehicle has occurred by using eye information, as well as vehicle speed and direction change information, and comparing them to a map.
Dependent:
i. coupled with sensor data on hand grip and contact area on steering wheel.
ii. coupled with the use of aural data.

5. Independent: Driver scoring by combining driver performance during events and non-events. Used to identify expert drivers.
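As a toy illustration of invention 5, the sketch below blends event and non-event performance into a single score and ranks drivers; the 70/30 weighting is an assumption for illustration, not a detail from the claims:

    # Weighted blend of performance during events (0..1) and non-events (0..1).
    def driver_score(event_perf: float, non_event_perf: float, w_event: float = 0.7) -> float:
        return w_event * event_perf + (1.0 - w_event) * non_event_perf

    # drivers: list of (name, event_perf, non_event_perf); best first.
    def rank_drivers(drivers):
        return sorted(drivers, key=lambda d: driver_score(d[1], d[2]), reverse=True)

    print(rank_drivers([("A", 0.95, 0.80), ("B", 0.60, 0.99)]))  # "A" ranks as the expert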


Australian Application under examination: AU 2018267553. Independent claim: Smartphone as an illumination system, with automatic switching of wavelengths to improve contrast and intensity of acquired images of eye movement.
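A minimal sketch of the wavelength-switching idea summarized above is shown below; the candidate wavelengths, the contrast metric, and the capture interface are placeholders, not details from AU 2018267553:

    # Crude contrast proxy: spread between brightest and darkest pixel values.
    def image_contrast(pixels) -> float:
        return (max(pixels) - min(pixels)) / 255.0

    # capture_with(wavelength_nm) -> list of pixel intensities (0-255).
    # Returns the illumination wavelength that yields the highest-contrast frame.
    def pick_wavelength(capture_with, wavelengths_nm=(620, 530, 460)):
        return max(wavelengths_nm, key=lambda wl: image_contrast(capture_with(wl)))

    # Stand-in for the smartphone camera plus screen/flash illumination.
    fake_frames = {620: [90, 150, 120], 530: [20, 230, 60], 460: [100, 140, 110]}
    print(pick_wavelength(lambda wl: fake_frames[wl]))  # 530: highest-contrast frame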

I plan to pursue many more applications (in the US, AU, and CA) using the same priority documents as the PCT, with an earliest priority of 20 November 2017 and a later priority of 19 November 2018; the new applications will be filed as divisionals (child applications).

Applications list in the field of Autonomous Vehicles:
US 2019/0156134
US 2019/0156150
AU 2018267553
AU 2018267541
CA 2986160
WO 2019095013 (unity: multiple inventions, all examined and clear report issued)

Attached files:
Written Opinion 3.pdf
WO2019095013A1.pdf

Patents:
WO 2019095013 issued 2019-05-17

Type of Offer: Sale or Licensing


