The integration can be verified by adding the following debug prints just before the `return action` statement in the RVO policy, at
https://github.com/mit-acl/gym-collision-avoidance/blob/6045c0255aa8e5e1b2062291ac59f72d7ee340c8/gym_collision_avoidance/envs/policies/RVOPolicy.py#L122

```python
print("="*40)
print("obs")
print(type(obs))
print(obs)
print("="*40)
print("action")
print(type(action))
print(action)
print("="*40)
```
The printed output over three iterations of the simulation is:
========================================
obs
<class 'dict'>
{'is_learning': array(False), 'num_other_agents': array(23), 'dist_to_goal': array(8.183), 'heading_ego_frame': array(0.392), 'pref_speed': array(1.), 'radius': array(0.5), 'other_agents_states': array([[ 2.284, -1.008, 0. , 0. , 0.5 , 1. , 1.496],
[ 1.334, 3.012, 0.091, -0.252, 0.5 , 1. , 2.295],
[-1.323, -3.022, 0.289, 0.204, 0.5 , 1. , 2.299],
[ 3.615, 1.998, 0. , 0. , 0.5 , 1. , 3.131],
[ 0.963, -4.03 , 0.292, 0.257, 0.5 , 1. , 3.143],
[ 4.477, -1.849, -0.469, 0.883, 0.5 , 1. , 3.844],
[ 5.722, 0.915, -0.873, -0.488, 0.5 , 1. , 4.795],
[ 3.231, -4.886, 0.171, 0.985, 0.5 , 1. , 4.858],
[ 2.684, 6.023, 0.043, -0.351, 0.5 , 1. , 5.594],
[-2.657, -6.061, 0.45 , 0.3 , 0.5 , 1. , 5.618],
[ 4.966, 5.007, 0.005, -0.389, 0.5 , 1. , 6.052],
[-0.384, -7.067, 0.32 , 0.358, 0.5 , 1. , 6.078],
[ 6.72 , -2.885, -0.671, 0.741, 0.5 , 1. , 6.313],
[ 7.99 , -0.052, -0.966, -0.26 , 0.5 , 1. , 6.99 ],
[ 5.464, -5.898, -0.225, 0.974, 0.5 , 1. , 7.04 ],
[ 7.124, 3.902, -0.615, -0.789, 0.5 , 1. , 7.123],
[ 1.924, -7.948, 0.265, 0.964, 0.5 , 1. , 7.177],
[ 9.389, 2.908, -0.716, -0.698, 0.5 , 1. , 8.829],
[ 4.183, -8.95 , 0.253, 0.967, 0.5 , 1. , 8.879],
[ 4.037, 9.053, 0.08 , -0.535, 0.5 , 1. , 8.912],
[ 6.309, 8.046, -0.061, -0.472, 0.5 , 1. , 9.225],
[ 8.47 , 6.956, -0.536, -0.844, 0.5 , 1. , 9.96 ],
[10.773, 5.928, -0.546, -0.838, 0.5 , 1. , 11.297],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ]])}
========================================
action
<class 'numpy.ndarray'>
[ 0.371 -0.163]
========================================
========================================
obs
<class 'dict'>
{'is_learning': array(False), 'num_other_agents': array(23), 'dist_to_goal': array(12.438), 'heading_ego_frame': array(0.106), 'pref_speed': array(1.), 'radius': array(0.5), 'other_agents_states': array([[ 1.504, -1.995, 0.38 , 0.082, 0.5 , 1. , 1.498],
[ 2.63 , 1.992, 0.266, -0.032, 0.5 , 1. , 2.299],
[-2.647, -2.001, 0.539, 0.042, 0.5 , 1. , 2.318],
[ 4.131, -0.002, 0. , 0. , 0.5 , 1. , 3.131],
[-1.155, -3.989, 0.454, 0.156, 0.5 , 1. , 3.152],
[ 3.066, -3.849, 0.63 , 0.776, 0.5 , 1. , 3.921],
[ 0.43 , -5.884, 0.702, 0.712, 0.5 , 1. , 4.9 ],
[ 5.636, -1.807, 0.022, 1. , 0.5 , 1. , 4.918],
[ 5.265, 3.97 , -0.044, -0.264, 0.5 , 1. , 5.594],
[ 6.761, 1.972, 0. , 0. , 0.5 , 1. , 6.042],
[ 4.52 , -5.822, 0.28 , 0.96 , 0.5 , 1. , 6.371],
[ 7.088, -3.805, -0.224, 0.975, 0.5 , 1. , 7.044],
[ 8.071, -0.002, -1. , -0. , 0.5 , 1. , 7.071],
[ 1.914, -7.861, 0.693, 0.721, 0.5 , 1. , 7.09 ],
[ 9.578, -1.952, -0.97 , 0.244, 0.5 , 1. , 8.775],
[ 7.912, 5.94 , -0.134, -0.327, 0.5 , 1. , 8.894],
[ 9.408, 3.939, -0.185, -0.342, 0.5 , 1. , 9.199],
[10.753, 1.922, -0.921, -0.388, 0.5 , 1. , 9.923],
[12.245, -0.052, -0.966, -0.259, 0.5 , 1. , 11.245],
[10.572, 7.923, -0.191, -0.506, 0.5 , 1. , 12.212],
[12.064, 5.936, -0.284, -0.383, 0.5 , 1. , 12.445],
[13.417, 3.93 , -0.88 , -0.475, 0.5 , 1. , 12.981],
[14.926, 1.908, -0.886, -0.464, 0.5 , 1. , 14.048],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ]])}
========================================
action
<class 'numpy.ndarray'>
[0.397 0.121]
========================================
========================================
obs
<class 'dict'>
{'is_learning': array(False), 'num_other_agents': array(23), 'dist_to_goal': array(18.196), 'heading_ego_frame': array(-0.144), 'pref_speed': array(1.), 'radius': array(0.5), 'other_agents_states': array([[ 1.019, -2.267, 0.477, 0.052, 0.5 , 1. , 1.486],
[ 3.022, 1.37 , 0.351, -0.041, 0.5 , 1. , 2.318],
[ 4.05 , -0.907, 0.389, -0.003, 0.5 , 1. , 3.151],
[ 2.148, -4.465, 0.841, 0.541, 0.5 , 1. , 3.955],
[ 5.166, -3.06 , 0.786, 0.619, 0.5 , 1. , 5.004],
[ 6.026, 2.735, 0.252, -0.089, 0.5 , 1. , 5.618],
[ 7.052, 0.459, 0. , 0. , 0.5 , 1. , 6.067],
[ 3.16 , -6.72 , 0.835, 0.551, 0.5 , 1. , 6.426],
[ 6.151, -5.304, 0.484, 0.875, 0.5 , 1. , 7.123],
[ 8.122, -1.633, 0.241, 0.971, 0.5 , 1. , 7.285],
[ 9.099, -3.901, -0.004, 1. , 0.5 , 1. , 8.9 ],
[ 9.031, 4.085, -0.101, -0.248, 0.5 , 1. , 8.912],
[10.051, 1.806, 0. , 0. , 0.5 , 1. , 9.212],
[10.895, -0.408, -0.976, 0.219, 0.5 , 1. , 9.903],
[11.936, -2.642, -0.892, 0.452, 0.5 , 1. , 11.225],
[12.047, 5.424, -0.203, -0.29 , 0.5 , 1. , 12.212],
[13.066, 3.143, -0.256, -0.293, 0.5 , 1. , 12.439],
[13.934, 0.879, -0.984, -0.176, 0.5 , 1. , 12.962],
[14.956, -1.374, -0.999, -0.041, 0.5 , 1. , 14.019],
[15.078, 6.774, -0.298, -0.452, 0.5 , 1. , 15.53 ],
[16.096, 4.507, -0.361, -0.311, 0.5 , 1. , 15.715],
[16.975, 2.252, -0.963, -0.27 , 0.5 , 1. , 16.124],
[18.003, -0.052, -0.966, -0.258, 0.5 , 1. , 17.003],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ]])}
========================================
action
<class 'numpy.ndarray'>
[0.729 0.045]
========================================
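From these dumps, each `obs` is a dict of numpy arrays: six 0-d scalars (`is_learning`, `num_other_agents`, `dist_to_goal`, `heading_ego_frame`, `pref_speed`, `radius`) plus a 24×7 `other_agents_states` matrix, which appears to hold the 23 other agents plus one zero-padded row. Each `action` is a plain 2-element ndarray. A minimal sketch reproducing those shapes, with a hypothetical `summarize_obs` helper (the scalar values are copied from the first dump; the agent matrix is zero-filled here for brevity):

```python
import numpy as np

def summarize_obs(obs):
    """Map each observation key to its numpy array shape.

    Scalars printed as e.g. array(8.183) are 0-d arrays, so their
    shape is (); 'other_agents_states' is a 2-D matrix.
    """
    return {key: np.asarray(value).shape for key, value in obs.items()}

# Observation skeleton mirroring the first dump above.
obs = {
    'is_learning': np.array(False),
    'num_other_agents': np.array(23),
    'dist_to_goal': np.array(8.183),
    'heading_ego_frame': np.array(0.392),
    'pref_speed': np.array(1.0),
    'radius': np.array(0.5),
    # 23 other agents plus one zero-padding row, 7 features per row
    'other_agents_states': np.zeros((24, 7)),
}
print(summarize_obs(obs))

# The action in each dump is a 2-element ndarray.
action = np.array([0.371, -0.163])
print(action.shape)
```

Note that the rows of `other_agents_states` are sorted by their last column (which looks like distance to the other agent), so the column layout is presumably relative position, relative velocity, and radius/distance features; that reading is an inference from the dumps, not confirmed against the repo.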