Transportation Research Record
Notable scientific publications
* Data are for reference only
Each year from 1998 to 2007, an average of approximately 4,800 pedestrians were killed and 71,000 pedestrians were injured in traffic crashes in the United States. Because many pedestrian crashes occur at roadway intersections, it is important to understand the intersection characteristics that are associated with pedestrian crash risk. The present study uses detailed pedestrian crash data and pedestrian volume estimates to analyze the pedestrian crash risk at 81 intersections along arterial and collector roadways in Alameda County, California. The analysis compares pedestrian crash rates (the number of crashes per 10,000,000 pedestrian crossings) with intersection characteristics. In addition, more than 30 variables were considered for use in the development of a statistical model of the number of pedestrian crashes reported at each study intersection from 1998 to 2007. After the pedestrian and motor vehicle volumes at each intersection were accounted for, negative binomial regression showed that significantly more pedestrian crashes occurred at intersections with more right-turn-only lanes, more nonresidential driveways within 50 ft (15 m), more commercial properties within 0.1 mi (161 m), and a greater percentage of residents within 0.25 mi (402 m) who were younger than age 18 years. Raised medians on both intersecting streets were associated with lower numbers of pedestrian crashes. These results, viewed in combination with other research findings, can be used by practitioners to design safer intersections for pedestrians. This exploratory study also provides a methodological framework for future pedestrian safety studies.
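As a rough illustration of the kind of crash-frequency model described above, the sketch below fits a negative binomial regression of intersection pedestrian crash counts with Python's statsmodels. The data file and covariate names (ped_volume, n_right_turn_lanes, and so on) are hypothetical placeholders for the kinds of variables the study mentions, not its actual dataset.

```python
# Hedged sketch, not the study's code: negative binomial regression of
# pedestrian crash counts on exposure and intersection characteristics.
import numpy as np                      # np.log is used inside the formula
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("intersections.csv")   # hypothetical file: one row per study intersection

model = smf.negativebinomial(
    "crash_count ~ np.log(ped_volume) + np.log(veh_volume)"
    " + n_right_turn_lanes + n_nonres_driveways_50ft"
    " + n_commercial_props_01mi + pct_under18_quarter_mi + raised_medians_both",
    data=df,
)
result = model.fit()
print(result.summary())                 # positive coefficients imply more expected crashes
```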
A research study was conducted to evaluate and quantify the effect of highway capacity improvements on travel demand. Statistical models using Nationwide Personal Transportation Survey data were designed to estimate relationships between average household travel time and vehicle-miles of travel. Several regression models were estimated, and the results were stratified by urbanized area, public transportation availability, metropolitan area size, family life cycle, day-of-week of travel, and population density. Travel-time elasticities of -0.3 to -0.5 were generally found, after taking into account the effects of household size, income, population density, and household employment. These results suggest that travelers will spend 30 to 50 percent of the time savings afforded by highway improvements in additional travel. Overall, the results of this study provide evidence that highway capacity improvements can create additional travel, although the magnitude of the induced traffic effect was found to be smaller than that reported by some previous researchers.
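The link between the reported elasticities and the share of time savings reabsorbed by new travel can be made explicit with a short constant-elasticity derivation (a standard sketch, not taken from the paper itself):

```latex
% Constant-elasticity sketch (not from the paper). Let V be household VMT and
% t travel time per mile, with elasticity e = d ln V / d ln t (about -0.3 to -0.5).
% Total household travel time is T = V t, so for a small change in t:
\[
  \frac{dT}{T} \;=\; \frac{dV}{V} + \frac{dt}{t} \;=\; (1 + e)\,\frac{dt}{t}.
\]
% The share of the potential time savings (-dt/t at fixed VMT) that is spent on
% additional travel is therefore
\[
  1 - (1 + e) \;=\; -e \;\approx\; 0.3 \text{ to } 0.5,
\]
% which matches the 30 to 50 percent figure quoted above.
```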
Some empirical findings are presented on the relationship between urban form and work trip commuting efficiency, drawn from the analysis of 1986 work trip commuting patterns in the greater Toronto area. Work trip commuting efficiency is measured with respect to the average number of vehicle kilometers traveled (VKT) per worker in a given zone. Preliminary findings include the following: VKT per worker increases as one moves away from both the central core of the city and from other high-density employment centers within the region; job-housing balance, per se, shows little impact on commuting VKT; and population density, in and of itself, does not explain variations in commuting VKT once other urban structure variables have been accounted for.
Dynamic user equilibrium has received considerable theoretical attention for morning peak-period travel but very little for the evening peak. In an attempt to redress this imbalance, morning and evening travel are characterized and compared by using Vickrey’s bottleneck model. To focus ideas, it is assumed that morning and evening travel differ in just one respect: scheduling preferences for the morning are defined in terms of arrival time at work, whereas preferences for the evening are defined in terms of departure time from work. Sufficient conditions are identified for the existence and uniqueness of a deterministic dynamic user equilibrium in terms of departure times for the morning and evening peaks. These conditions, which go well beyond previous work, involve relatively general assumptions about the schedule delay cost functions for morning and evening and essentially no restrictions on the degree of heterogeneity in trip-timing preferences of travelers. Plausibility of the conditions is examined in light of the limited empirical evidence. A numerical example is developed at length to illustrate the importance of traveler heterogeneity and the extent of differences between morning and evening in the time pattern of departures and aggregate travel costs.
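For readers unfamiliar with the framework, a standard textbook statement of the morning commuter's trip cost in Vickrey's bottleneck model is given below; the notation is illustrative and may differ from the paper's.

```latex
% Standard form of Vickrey's bottleneck model (illustrative notation).
% A commuter departing home at time d arrives at a(d) = d + T_f + Q(d)/s,
% where T_f is free-flow travel time, Q(d) the queue at the bottleneck, and
% s its capacity. With preferred arrival time t*, the morning trip cost is
\[
  C_{\mathrm{AM}}(d) \;=\; \alpha\,[\,a(d) - d\,]
    \;+\; \beta\,[\,t^{*} - a(d)\,]^{+}
    \;+\; \gamma\,[\,a(d) - t^{*}\,]^{+},
    \qquad [x]^{+} = \max(x, 0).
\]
% For the evening peak, the key modification described in the abstract is that
% the schedule delay terms are defined on the departure time from work rather
% than on the arrival time a(d).
```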
Eco-approach and departure is a complex control problem wherein a driver’s actions are guided over a period of time or distance so as to optimize fuel consumption. Reinforcement learning (RL) is a machine learning paradigm that mimics human learning behavior, in which an agent attempts to solve a given control problem by interacting with the environment and developing an optimal policy. Unlike the methods implemented in previous studies for solving the eco-driving problem, RL does not require prior knowledge of the environment to be learned and processed. This paper develops a deep reinforcement learning (DRL) agent for solving the eco-approach and departure problem in the vicinity of signalized intersections to minimize fuel consumption. The DRL algorithm uses a deep neural network as the function approximator for the RL agent. Novel strategies such as varying actions, prioritized experience replay, a target network, and double learning were implemented to overcome the expected instabilities during the training process. The results revealed the significance of the DRL algorithm in reducing fuel consumption. Notably, the DRL agent successfully learned the environment and guided vehicles through the intersection without red-light-running violations. On average, the DRL agent provided fuel savings of about 13.02% with no red-light-running violations.
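A minimal sketch of the double-learning target computed with a separate target network, two of the stabilization strategies named above, is shown below in PyTorch. This is illustrative code, not the paper's implementation; the state dimension, action count, and dummy batch are assumptions.

```python
# Hedged sketch: double DQN target with a separate target network.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 3, 0.99  # assumed problem size


def make_q_net():
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))


online_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(online_net.state_dict())  # periodically synced in training


def double_dqn_target(reward, next_state, done):
    """TD target: action chosen by the online net, evaluated by the target net."""
    with torch.no_grad():
        best_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
        return reward + gamma * next_q * (1.0 - done)


# Example usage with a dummy batch of transitions
batch = 32
target = double_dqn_target(torch.zeros(batch), torch.randn(batch, state_dim), torch.zeros(batch))
```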
One problem associated with the operation of electric vehicles (EVs) is their limited battery capacity, which cannot guarantee their endurance. Increasing electricity consumption also imposes an economic and ecological burden on the vehicles. To save energy, this paper proposes an adaptive eco-driving method for signalized corridors. The framework, with adaptive and real-time control, is implemented with the reinforcement learning technique. First, the operation of EVs in the proximity of intersections is formulated as a Markov decision process (MDP) so that the twin delayed deep deterministic policy gradient (TD3) algorithm can be applied to handle the decision process with a continuous action space; the speed of the vehicle can therefore be adjusted continuously. Second, safety, traffic mobility, energy consumption, and comfort are all considered by designing a comprehensive reward function for the MDP. Third, the simulation study takes Aoti Street in Nanjing City, with several consecutive signalized intersections, as the research road network, and the state representation in the MDP considers the information from consecutive downstream traffic signals. After the parameter tuning procedure, simulations are carried out for three typical eco-driving scenarios: free flow, car following, and congested flow. Compared with the default car-following behavior in the simulation platform SUMO and several state-of-the-art deep reinforcement learning algorithms, the proposed strategy shows a balanced and stable performance.
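A hedged sketch of what a comprehensive reward of the kind described (safety, mobility, energy, comfort) might look like is given below; the individual terms and weights are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative multi-objective reward for an eco-driving agent.
def eco_driving_reward(speed, speed_limit, energy_kwh, accel, jerk,
                       gap_to_leader, min_safe_gap,
                       w_safe=1.0, w_mob=0.3, w_energy=0.5, w_comfort=0.1):
    r_safety = -1.0 if gap_to_leader < min_safe_gap else 0.0   # penalize unsafe gaps
    r_mobility = -abs(speed_limit - speed) / speed_limit       # stay near the limit
    r_energy = -energy_kwh                                     # per-step energy use
    r_comfort = -(abs(accel) + abs(jerk))                      # smooth driving
    return (w_safe * r_safety + w_mob * r_mobility
            + w_energy * r_energy + w_comfort * r_comfort)


# Example: a smooth, safe step near the speed limit with low energy use
print(eco_driving_reward(speed=13.5, speed_limit=14.0, energy_kwh=0.02,
                         accel=0.3, jerk=0.1, gap_to_leader=25.0, min_safe_gap=10.0))
```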
Plug-in hybrid electric vehicles (PHEVs) show great promise in reducing transportation-related fossil fuel consumption and greenhouse gas emissions. Designing an efficient energy management system (EMS) for PHEVs to achieve better fuel economy has been an active research topic for decades. Most of the advanced systems rely either on a priori knowledge of future driving conditions to achieve the optimal but not real-time solution (e.g., using a dynamic programming strategy) or on only current driving situations to achieve a real-time but nonoptimal solution (e.g., rule-based strategy). This paper proposes a reinforcement learning–based real-time EMS for PHEVs to address the trade-off between real-time performance and optimal energy savings. The proposed model can optimize the power-split control in real time while learning the optimal decisions from historical driving cycles. A case study on a real-world commute trip shows that about a 12% fuel saving can be achieved without considering charging opportunities; further, an 8% fuel saving can be achieved when charging opportunities are considered, compared with the standard binary mode control strategy.
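As an illustration of the RL idea behind such an energy management system, the sketch below shows a tabular Q-learning update for a discretized power-split decision, with battery state of charge and power demand as the state and the engine power level as the action. The discretization and the reward definition are assumptions, not the paper's actual design.

```python
# Hedged sketch: tabular Q-learning for a PHEV power-split decision.
import numpy as np

n_soc_bins, n_demand_bins, n_engine_levels = 10, 10, 5
Q = np.zeros((n_soc_bins, n_demand_bins, n_engine_levels))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)


def choose_action(state):
    """Epsilon-greedy choice of engine power level for a (soc_bin, demand_bin) state."""
    if rng.random() < eps:
        return int(rng.integers(n_engine_levels))
    return int(np.argmax(Q[state]))


def update(state, action, reward, next_state):
    """Standard Q-learning update; the reward would be negative fuel use per step."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state + (action,)] += alpha * (td_target - Q[state + (action,)])
```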
Eco-driving behavior can improve vehicles’ fuel consumption efficiency and minimize exhaust emissions, especially in the presence of infrastructure-to-vehicle (I2V) communications for connected vehicles. Several techniques such as dynamic programming and neural networks have been proposed to study eco-driving behavior. However, most techniques need a complicated problem-solving process and cannot be applied to dynamic traffic conditions. Comparatively, reinforcement learning (RL) presents great potential for self-learning to take actions in a complicated environment to achieve the optimal mapping between traffic conditions and the corresponding optimal control action of a vehicle. In this paper, a vehicle was treated as an agent that selects its maneuver, that is, acceleration, cruising speed, or deceleration, according to dynamic conditions while approaching a signalized intersection equipped with I2V communication. An improved cellular automaton model was utilized as the simulation platform. Three parameters, including the distance between the vehicle and the intersection, the signal status, and the instantaneous vehicle speed, were selected to characterize the real-time traffic state. The total CO2 emitted by the vehicle on the approach to the intersection serves as the basis of the reward signal, informing the vehicle how good its operation was. The Q-learning algorithm was utilized to optimize vehicle driving behaviors for eco-driving. Vehicle exhaust emissions and traffic performance (travel time, stop duration, and stop rate) were evaluated in two cases: (1) an isolated intersection, and (2) a medium-scale realistic network. Simulation results showed that the eco-driving behavior obtained by RL can not only reduce emissions but also optimize traffic performance.
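A small sketch of the state and reward setup the abstract describes is shown below: distance to the stop line, signal phase, and speed are discretized into a Q-table index, and per-step CO2 serves as the (negative) reward. Bin widths and the emission measurement are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch: discrete state encoding and CO2-based reward for Q-learning.
import numpy as np

ACTIONS = ("accelerate", "cruise", "decelerate")
dist_bins = np.arange(0, 301, 10)     # distance to the stop line, m
speed_bins = np.arange(0, 21, 1)      # speed, m/s
phases = ("red", "green")


def encode_state(distance_m, phase, speed_ms):
    """Map continuous observations to a discrete Q-table index."""
    d = int(np.digitize(distance_m, dist_bins))
    v = int(np.digitize(speed_ms, speed_bins))
    p = phases.index(phase)
    return d, p, v


Q = np.zeros((len(dist_bins) + 1, len(phases), len(speed_bins) + 1, len(ACTIONS)))


def step_reward(co2_grams_this_step):
    """Reward policy: less CO2 emitted on the approach means a higher reward."""
    return -co2_grams_this_step


# Example: index the Q-table for a vehicle 120 m from a red signal at 8 m/s
print(Q[encode_state(120.0, "red", 8.0)])
```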
Because of the development of scientific technology, drivers now have access to a variety of information to assist their decision making. In particular, an accurate prediction of travel time is valuable to drivers, who can use it to choose a route or decide on a departure time. Although many researchers have sought to enhance prediction accuracy, such predictions are often limited by errors that result from the lagged pattern of predicted travel time, the use of nonrepresentative samples for making predictions, and the use of inefficient and nontransferable models. The proposed model predicts travel times on the basis of the k nearest neighbor method and uses data provided by the vehicle detector system and the automatic toll collection system. By combining these two sets of data, the model minimizes the limitations of each set and enhances prediction accuracy. Criteria for traffic conditions allow the direct use of data acquired from the automatic toll collection system as the predicted travel time. The proposed model's predictions are compared with those of other models by using actual data to show that the proposed model predicts travel times much more accurately. The proposed model's predictions of travel time are expected to be free from the problems associated with an insufficient number of samples. Further, unlike the widely used artificial neural network and Kalman filter methods, the proposed model does not require lengthy training, so the model is easily transferable.
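A minimal illustration of the k-nearest-neighbor idea is shown below: the current traffic-state vector is matched against historical observations and the travel times that followed are averaged. The features, stand-in data, and value of k are assumptions, not the paper's calibrated choices.

```python
# Hedged sketch: k-NN travel time prediction from historical traffic states.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical historical database: rows are past time steps, columns are
# detector-based features (e.g., speeds/volumes); y is the realized travel time.
X_hist = rng.random((5000, 6))
y_hist = 10 + 30 * X_hist[:, 0] + rng.random(5000)   # stand-in travel times, minutes

knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
knn.fit(X_hist, y_hist)

current_state = rng.random((1, 6))                   # the current detector reading vector
predicted_travel_time = knn.predict(current_state)[0]
print(f"Predicted travel time: {predicted_travel_time:.1f} min")
```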