This is the code base for the MHRS algorithm.

# MultiHopRideSharing (MHRS)

## Step 1: Deciding Multi-Hop Zones

Run the Jupyter notebook named deciding_multi_hop_zones.ipynb step-by-step.

### Step 1.1

The inputs are the following:

```python
geohash_path = '.../zones_v2.csv'
trip_path = '.../trips_2016-05_v2.csv'
```

Replace ... with the folder path on your machine. Information about the two files:

zones_v2.csv: This file holds the geohash zone information, created with another piece of code (not currently shared). The file itself is available in the associated OneDrive folder.

trips_2016-05_v2.csv: This file contains the raw New York City taxi trip data. It is available in the associated OneDrive folder.
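A typical way to read the two input files (a minimal sketch assuming pandas, which notebooks like these commonly use; the column names in the test data are illustrative, not from the repo):

```python
import pandas as pd

def load_inputs(geohash_path, trip_path):
    """Read the zone table and the raw trip records into DataFrames."""
    zones = pd.read_csv(geohash_path)
    trips = pd.read_csv(trip_path)
    return zones, trips
```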

The first output generated by this Jupyter notebook is:

'.../trips_hop_zones_0721.csv': This file is available in the associated OneDrive folder.

### Step 1.2

The inputs are the following:

```python
hoptrips_path = '.../trips_hop_zones_0721.csv'
trip_path = '.../trips_2016-05_v2.csv'
```

Replace ... with the folder path on your machine. Information about the two files:

trips_hop_zones_0721.csv: This file contains the hop zone information. It is available in the associated OneDrive folder.

trips_2016-05_v2.csv: Same as earlier.

The second set of outputs generated by this Jupyter notebook is:

```
'.../hoptrips_ratio_2_dist_2.csv'
'.../hoptrips_dist_2.csv'
'.../hoptrips_ratio_2_dist_3.csv'
'.../hoptrips_dist_3.csv'
```

All the above files are shared in the associated OneDrive folder.
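The paired output filenames suggest that this step splits the hop trips by a hop-distance cap and, additionally, by a detour-ratio cap. A hypothetical sketch of that kind of filtering (the column names `dist` and `ratio` and the thresholds are assumptions inferred from the filenames, not taken from the repo):

```python
import pandas as pd

def split_hoptrips(trips: pd.DataFrame, max_ratio: float, max_dist: int):
    """Return (ratio-and-distance-filtered, distance-filtered) trip tables,
    mirroring the paired hoptrips_ratio_R_dist_D / hoptrips_dist_D outputs."""
    by_dist = trips[trips['dist'] <= max_dist]
    by_ratio = by_dist[by_dist['ratio'] <= max_ratio]
    return by_ratio, by_dist
```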

### Step 1.3

The inputs are the following:

```python
hoptrip_path = '.../hoptrips_ratio_2_dist_2.csv'
```

Replace ... with the folder path on your machine. The above file is one of the outputs from the previous step.

The third output generated by this Jupyter notebook is:

'.../hoptrips_all.csv': This file is available in the associated OneDrive folder.

### Step 1.4

The inputs are the following:

```python
hoptrip_path = '.../hoptrips_all.csv'
graph_path = '.../nyc_network_graph.pkl'
```

Replace ... with the folder path on your machine. Information about the two files:

hoptrips_all.csv: Same as earlier.

nyc_network_graph.pkl: The New York City network graph.

Both these files are made available in the associated OneDrive folder.
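Only the pickle format is implied by the .pkl extension, so loading the network file reduces to a plain unpickle (variable names here are illustrative):

```python
import pickle

def load_graph(graph_path):
    """Unpickle the NYC network object saved in nyc_network_graph.pkl."""
    with open(graph_path, 'rb') as f:
        return pickle.load(f)
```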

The fourth output generated by this Jupyter notebook is:

'.../g2mm.csv': This file is available in the associated OneDrive folder.

### Step 1.5

The inputs are the following:

```python
nyc = '.../taxi_zones.shp'
osm_path = '.../osm_nyc.json'
```

Replace ... with the folder path on your machine.

The fifth output generated by this Jupyter notebook is:

'.../zones_hop_v1.csv': This file is available in the associated OneDrive folder.

### Step 1.6

The inputs are the following:

```python
geohash_table = '.../zones_hop_v1.csv'
```

Replace ... with the folder path on your machine. This file is available in the associated OneDrive folder.

The sixth output generated by this Jupyter notebook is:

'.../hoptrips_all_v2.csv': This file is available in the associated OneDrive folder.

### Step 1.7

The inputs are the following (hoptrip_path is assigned twice; the second assignment points at this step's output file):

```python
hoptrip_path = '.../hoptrips_all_v2.csv'
hoptrip_path = '.../hoptrips_all_v3.csv'
geohash_table = '.../zones_hop_v2.csv'
```

Replace ... with the folder path on your machine. The above files are available in the associated OneDrive folder.

The seventh output generated by this Jupyter notebook is:

'.../hoptrips_all_v3.csv': This file is available in the associated OneDrive folder.

## Step 2: Demand Prediction

Run the Jupyter notebook named demand_prediction_2.ipynb step-by-step.

The inputs are the following:

```python
df = '.../trips_2016-05_v2.csv'
zones = '.../zones_v2.csv'
```

Replace ... with the folder path on your machine. Both files are available in the associated OneDrive folder.

The output generated by this Jupyter notebook is:

'.../model.h5': This file holds the demand prediction model and is available in the associated OneDrive folder.
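Demand prediction of this kind typically starts from per-zone, per-interval pickup counts built from the trip table. A hypothetical sketch of that aggregation step (the column names pickup_datetime and pickup_zone are assumptions; the actual notebook defines its own features and saves a Keras model to model.h5):

```python
import pandas as pd

def demand_counts(trips: pd.DataFrame, freq: str = '30min') -> pd.DataFrame:
    """Count pickups per zone per time bucket, as a prediction target."""
    t = trips.copy()
    t['bucket'] = pd.to_datetime(t['pickup_datetime']).dt.floor(freq)
    return (t.groupby(['bucket', 'pickup_zone'])
             .size()
             .rename('demand')
             .reset_index())
```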

## Step 3: Expected Time to Arrival (ETA) Modeling

Run the ETA modeling Jupyter notebook step-by-step.

The inputs are the following:

```python
data_path = '.../trips_2016-05_v2.csv'
path = '.../triptime_predictor.pkl'
geohash_table = '.../zones.csv'
```

Replace ... with the folder path on your machine. All these files are available in the associated OneDrive folder.

The output generated by this Jupyter notebook is:

'.../eta.csv': This file holds the required ETAs and is available in the associated OneDrive folder.
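A table like eta.csv is convenient to consume as an (origin, destination) lookup. A minimal sketch of such a consumer (the column names origin, destination, and eta are assumptions about the file layout, not confirmed by the repo):

```python
import pandas as pd

def eta_lookup(eta_path):
    """Build an (origin, destination) -> ETA map from the eta.csv table."""
    eta = pd.read_csv(eta_path)
    return {(row.origin, row.destination): row.eta
            for row in eta.itertuples(index=False)}
```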

## Step 4: Training the Deep Q Network (DQN)

After running the Jupyter notebooks discussed in Steps 1-3, you can train the deep Q network by executing the following from the command line:

```shell
python -W ignore train_dqn.py
```

The above Python file requires the following inputs:

```python
GRAPH_PATH = '.../nyc_network_graph.pkl'
TRIP_PATH = '.../hoptrips_all_v3.csv'
ETA_MODEL_PATH = '.../triptime_predictor.pkl'
GEOHASH_TABLE_PATH = '.../zones_hop_v2.csv'
SCORE_PATH = '...'
INITIAL_MEMORY_PATH = SCORE_PATH + 'ex_memory_v52.pkl'
```

Replace ... with the folder path on your machine. All these files are available in the associated OneDrive folder.

The code has dependencies through the following imports:

```python
from simulator_v2 import FleetSimulator
from dqn_v3 import Agent
from experiment import run, load_trip_chunks, describe
```

All of these modules, along with their own dependencies, are included in this repository.

The link to the associated OneDrive folder is OneDrive.

© Ashutosh Singh, Abubakr Alabbasi, and Vaneet Aggarwal.

Please cite the following paper if using any part of the code:

  1. A. Singh, A. Alabbasi, and V. Aggarwal, "A distributed model-free algorithm for multi-hop ride-sharing using deep reinforcement learning," arXiv preprint arXiv:1910.14002, Oct 2019 (also in NeurIPS Workshop 2019).

```bibtex
@inproceedings{singh2019reinforcement,
  title={A reinforcement learning based algorithm for multi-hop ride-sharing: Model-free approach},
  author={Singh, Ashutosh and Al-Abbasi, AO and Aggarwal, Vaneet},
  booktitle={Neural Information Processing Systems (Neurips) Workshop},
  year={2019}
}

@article{singh2019distributed,
  title={A Distributed Model-Free Algorithm for Multi-hop Ride-sharing using Deep Reinforcement Learning},
  author={Singh, Ashutosh and Alabbasi, Abubakr and Aggarwal, Vaneet},
  journal={arXiv preprint arXiv:1910.14002},
  year={2019}
}
```

Since this code builds on code developed in the papers below, please cite them as well.

  1. Abubakr Al-Abbasi, Arnob Ghosh, and Vaneet Aggarwal, "DeepPool: Distributed Model-free Algorithm for Ride-sharing using Deep Reinforcement Learning," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 12, pp. 4714-4727, Dec 2019.

```bibtex
@article{al2019deeppool,
  title={Deeppool: Distributed model-free algorithm for ride-sharing using deep reinforcement learning},
  author={Al-Abbasi, Abubakr O and Ghosh, Arnob and Aggarwal, Vaneet},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  volume={20},
  number={12},
  pages={4714--4727},
  year={2019},
  publisher={IEEE}
}
```

  2. T. Oda and C. Joe-Wong, "MOVI: A model-free approach to dynamic fleet management," IEEE INFOCOM 2018. (Their code is available at https://github.com/misteroda/FleetAI )

```bibtex
@inproceedings{oda2018movi,
  title={MOVI: A model-free approach to dynamic fleet management},
  author={Oda, Takuma and Joe-Wong, Carlee},
  booktitle={IEEE INFOCOM 2018-IEEE Conference on Computer Communications},
  pages={2708--2716},
  year={2018},
  organization={IEEE}
}
```
