We know from past research that riders of different modes often report very different satisfaction levels, trip-planning processes, and expectations for their MBTA usage. We decided to examine our Customer Opinion Panel data to see whether these previously observed satisfaction differences appear in our panel, and to begin thinking about possible causes of those differences.
The first, and most complicated, step in analyzing the relationship between mode and satisfaction is to categorize our riders by primary mode of transport. Although some MBTA riders consistently use the same mode (e.g. a Commuter Rail rider who takes the same train to work every day), most MBTA riders take multiple modes as needed for their trip. Primary mode could be calculated for the rider’s overall behavior (over time, what’s the most frequent mode they take?), or for their most recent trip. Both classifications are useful for answering different research questions.
We regularly capture data on customer satisfaction through our monthly panel survey in which riders report on their most recent trip on the T. From those responses, we are able to categorize their recent transit usage. For some respondents, determining which mode of transport they primarily use is easy. If someone reports taking the Orange Line and the Red Line, they can comfortably be classified as a rapid transit rider. If another respondent reports only taking the route 22 bus, they are a bus rider.
However, suppose a rider reports a trip in which they started on the 1 bus, transferred to the Orange Line, and finally hopped on the Worcester Commuter Rail. Are they more likely to think like a bus rider? Like a rapid transit rider? Like a commuter rail rider? How do we classify those who take complicated trips?
We created four categories into which riders could be separated: (1) commuter rail riders, (2) bus and rapid transit riders, (3) rapid transit only riders, and (4) bus only riders.
In past research, we determined that if a respondent reported riding the commuter rail for any leg of their trip, regardless of the modes taken for other legs, their opinions on a variety of questions (e.g. crowding, reliability, frequency) were very similar to those of riders who reported using only the commuter rail. Therefore, if a respondent reported riding the commuter rail on any part of their trip, they were classified as a commuter rail rider.
The commuter rail is more expensive than the other MBTA modes, the rides are often longer, and the trains are less frequent. These three factors often make the commuter rail the most influential part of a rider’s trip-planning process. To give an example, a rider taking the Green Line to the commuter rail may be more sensitive to delays on the Green Line because they must arrive at North Station by a scheduled time.
For trips composed exclusively of rides on the subway, respondents were classified as rapid transit-only riders, and likewise for bus-only riders. For the purposes of this study, the Silver Line counted as a bus.
For those who ride only the core portion of the MBTA, transfers between buses and the subway are fairly common. In past research, these bus-and-subway riders' opinions have not aligned clearly with either bus-only or subway-only riders'. Since there is no particularly good reason to classify those who ride both modes as either bus riders or rapid transit riders, and because the sample was large enough to support one, we created a joint category.
That left us with our ferry riders, approximately 0.4% of the MBTA’s ridership. Ferry riders sometimes display behaviors similar to commuter rail riders and are lumped together with them (as is the case with trip-planning behaviors). However, ferry riders report their experiences and satisfaction very differently from commuter rail riders, making a joint category unreliable. Unfortunately, ferry riders are too few in our survey panel to constitute a category of their own. Ultimately, we decided that if ferry riders reported a second mode, they would be classified based on that leg of their trip. If they exclusively reported taking the ferry, their responses were not included in this analysis. (Sign up to join the panel and increase our sample size!)
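The classification rules above can be sketched as a short function. This is an illustrative reconstruction of the rules as described, not the actual code used in the analysis; the mode labels and function name are our own.

```python
def classify_rider(modes):
    """Classify a respondent by the set of modes reported for one trip.

    `modes` is a set of strings drawn from {"bus", "rapid_transit",
    "commuter_rail", "ferry"}; Silver Line trips count as "bus".
    Returns a category label, or None for ferry-only trips, which are
    excluded from the analysis.
    """
    # Any commuter rail leg makes the respondent a commuter rail rider,
    # regardless of other modes taken.
    if "commuter_rail" in modes:
        return "commuter_rail"
    # Ferry riders are classified by the other mode they reported, if any.
    modes = modes - {"ferry"}
    if not modes:
        return None  # ferry-only: excluded from this analysis
    if modes == {"bus", "rapid_transit"}:
        return "bus_and_rapid_transit"
    if modes == {"rapid_transit"}:
        return "rapid_transit_only"
    if modes == {"bus"}:
        return "bus_only"
    return None
```

For example, the trip described earlier (1 bus, then Orange Line, then Worcester Commuter Rail) reports `{"bus", "rapid_transit", "commuter_rail"}` and is classified as a commuter rail rider.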
Because this classification process relies on a single reported ride, all survey respondents are reclassified each month. The average number of monthly respondents in each category between July 2015 and July 2016 is displayed in Table 1.
Table 1: Average Monthly Responses by Mode

| Mode | Average Monthly Responses |
| --- | --- |
| Rapid Transit and Bus | 84 |
We chose three metrics, all on a seven-point scale, to look at customer satisfaction:
- Overall satisfaction with the MBTA asks respondents to rate the MBTA based on their experience over time, essentially their “average trip.”
- Trip satisfaction asks respondents to rate the single ride they reported in the survey. The reason we ask both trip satisfaction and overall satisfaction is because a particular trip may be better, worse, or pretty similar to a rider’s perceived “average trip” on the MBTA. In fact, most of our data shows that on average, riders report lower overall satisfaction than trip satisfaction.
- The reliability perception rating asks respondents to rate their agreement with the statement: “The MBTA provides reliable public transportation services.”
The results of these three metrics over the course of a year are shown in the graphs below. In those graphs, the blue line shows the weighted average rating for all riders, while the solid lines represent the average within the four categories we separated riders into.
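The averages plotted in those graphs can be sketched as follows. This is a minimal illustration of the calculation, assuming each response carries a survey weight; it is not the actual analysis code, and the data format is hypothetical.

```python
from collections import defaultdict

def monthly_averages(responses):
    """Compute the weighted systemwide average rating plus weighted
    averages within each rider category for one month of responses.

    `responses` is a list of dicts with keys "category", "rating"
    (a 1-7 score), and "weight" (the respondent's survey weight).
    """
    # category -> [weighted sum of ratings, sum of weights]
    totals = defaultdict(lambda: [0.0, 0.0])
    for r in responses:
        # Each response contributes to the systemwide ("all") average
        # and to its own category's average.
        for key in ("all", r["category"]):
            totals[key][0] += r["rating"] * r["weight"]
            totals[key][1] += r["weight"]
    return {k: s / w for k, (s, w) in totals.items()}
```

Running this once per month per metric yields the systemwide line and the four category lines shown in the graphs.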
In the Overall Satisfaction results, you can see that commuter rail riders generally report lower satisfaction than other riders. However, the difference is not particularly drastic, and in some months, rapid transit and bus riders report the lowest overall satisfaction. The different rider categories tend to trend together, with the exception of February 2016, when most categories report an increase in satisfaction but commuter rail riders report a particularly striking decrease.
Overall, the trip satisfaction averages are higher than overall satisfaction averages. We see pretty similar trends in these first two graphs between modes: different rider categories trending together with the exception of February ’16, and commuter rail riders’ generally low satisfaction rates.
Once again, commuter rail riders reported the lowest satisfaction with the MBTA’s reliability. Although this can be seen in the other metrics, the reliability perception most clearly demonstrates that respondents who ride the bus are the most satisfied with MBTA services and reliability. Also interesting: the average responses for the reliability perception are almost a full point lower on the seven-point scale than for the other metrics.
The differences we see in the graphs above between riders of different modes and their reported satisfaction leave a lot of room to ask: “Why?”
Particularly interesting are the gaps in satisfaction between commuter rail riders, who tended to report the lowest satisfaction, and bus riders, who tended to report the highest. If we look at actual reliability performance, commuter rail performs much better than buses do on a day-to-day basis. However, we cannot directly compare these measures because commuter rail performance and bus performance are measured very differently. Given the different nature of commuter rail and bus service, it makes sense that reliability is perceived differently.
Perhaps expectations for service differ because of fundamental differences in the modes themselves, not the riders. The very reasons we created a separate commuter rail category (that commuter rail is more expensive, that it runs less often, and that cancellations and delays can be more disruptive to commutes) could be the same reasons that satisfaction and perception vary so much.
Or could it be that differences between the riders themselves lead to such different satisfaction levels? We know there are demographic differences, particularly in ethnicity, income, and age, between the riders of various modes; is it possible that these characteristics alter satisfaction? At first glance, there does not seem to be a strong relationship between demographics and satisfaction, particularly because the modes themselves vary demographically. Hopefully, with updated demographic data, especially income data, from our panel respondents, we will be able to take a more in-depth look at this relationship.
Most likely, it is a mix of both the fundamental characteristics of riders and the services themselves that create variability in satisfaction levels.
Although we were unable to draw any conclusive results as to why satisfaction levels differ by mode, this analysis did give us some very important insight into how we should conduct future analyses. If we only look at customer satisfaction data in the aggregate, we may miss key insights that a mode-separated analysis would highlight. Future analyses should also look at modes separately so we can address the needs of each mode appropriately.