Srinivas Chetlur is the Director of Data Integration & Analytics at Westinghouse Nuclear. Prior to starting his current position at Westinghouse over four years ago, Srinivas spent more than 16 years in the aerospace industry. Over the course of his career, Srinivas has established and led data and analytics programs at Westinghouse and Thyssenkrupp. Throughout his roles as Director, Senior Manager and Continuous Improvement Manager, he has supported all aspects of operations, manufacturing, supply chains, and engineering. His work has also included leading cross-functional and enterprise-wide efforts to improve data quality and implement digital transformation initiatives.

Thank you, Srinivas, for joining the Q and A session today. The first question I wanted to ask is about our alma mater, the University of Washington. There is an on-campus club at the UW called the Business Impact Group. We both participated in BIG, you as a mentor and I as a student. BIG provides pro-bono consulting for local small businesses. What did you see students doing right as part of their consulting engagements with local businesses? And what did you see them doing wrong?

[Srinivas] It’s been a few years but I fondly remember that engagement. The first thing is, the students were really engaged with what they were doing. So that was positive. And the other one that I remember is that the students working with the two companies made sure that they gained alignment with their clients right from the beginning. I remember there were multiple touch points. They [the students] would say, this is what we’re thinking, this is what we’re doing. So from a doing-it-right perspective, I’d say that maintaining alignment all through the project was a big plus. And it showed. When we did the presentation, it was well received. So I think that stakeholder engagement was a big positive. I don’t think there was anything majorly negative. We did have some feedback for students, but not from a business perspective.

I’d like to ask a question about your early career. You started out implementing Lean methodologies, but later on shifted into analytics. What drove your decision to shift into analytics? You’ve also mentioned that “data is a gateway to innovation.” What experiences or discoveries during your career led you to this conclusion?

[Srinivas] One thing that I realized quite early is, when you are managing operations or doing continuous improvement, whether it’s Lean or Six [Sigma], a key part of what you do is measurement because that’s how you know where you are – that’s how you show the impact that you had through your improvement initiative. So measurement is the lifeblood of continuous improvement. And that’s pretty much what led me to analytics and BI. 

A vast majority of the projects we did were driven by the desire to answer questions that we could not before. Whether they were business problems, understanding what happened or understanding why something happened. In order to answer these questions, we needed numbers, and that meant we needed data. However, in many cases the numbers did not exist because, for instance, the business processes on the shop floor were not mature enough to provide us the right numbers.

The constraints we experienced and overcame, including our resource and technology limitations, drove us to constantly seek ways that we could get the data we needed so that we could understand our business processes. That is where the idea of data as a gateway to innovation comes from: we developed innovations in order to get the data, and were in turn able to develop subsequent innovations once we had it.

“Data is a gateway to innovation.”

Were the measurement limitations you saw more of an issue with designing the kinds of operational processes needed to gather the data (i.e., having people on the shop floor write down a number when a certain event happened), or was it a matter of building out the technology and integrations required to ingest and analyze that data?

[Srinivas] I would say both. One example would be what you just mentioned, having an MES (Manufacturing Execution System), where employees record events on the shop floor, giving visibility into how long certain processes take, which events happen, and so on. And then the other aspect was developing new reports and dashboards.

At ThyssenKrupp, you reduced business decision time from over 45 days to less than a week with new KPI reporting capabilities. What barriers limited decision-making ability, and why did decisions take so much time? Moreover, how was data able to reduce the time required to make those strategic decisions?

[Srinivas] The big problem was the timely availability of data. So the reason why it was taking 30 to 45 days was that key data was only available once a month. Because of this, our reaction time was delayed. If you can only get your hands on a number once a month, then that delay will drive a certain lag in decision making. The key thing that changed was to make the data available more often. In some cases, we reduced reporting to a day, versus once a month. So, for instance, our team then knew how many orders we were shipping on time.

Similarly, there were a few other KPIs that we could report on a weekly basis, versus monthly. So by reducing reporting turnaround times to a day, or at most a week, in terms of data availability, we made sure that the business could react in a timely manner. So if you saw a downward trend in your deliveries, for instance, then you could immediately initiate corrective or preventive actions. This could be done sooner in the month, so that you could still hit your goal by the end of the month. That was the motivation behind reducing the reaction time and making data available more often. These efforts led us to reduce the business decision cycle down from 45 days to less than a week.

When I was a product manager at Amazon Web Services, we saw that too. Our product team needed customer retention and growth numbers on a daily basis to see how effective certain product releases were. However, often the reporting was only available on a monthly basis. So it was difficult to understand which product releases had more impact on our customer growth.

To switch gears and quickly discuss Westinghouse: 65% of businesses fail within their first 10 years, yet Westinghouse has been around for over 100 years. What has the role of data and analytics been in enabling the company to stay relevant, and have there been any specific data initiatives that have directly contributed to its ability to stay relevant in the marketplace?

[Srinivas] Westinghouse is a very long-lived company, largely because of its perspective on data and analytics. What has enabled it to stay relevant, other than its core engineering and products, has been its use of data and analytics. The Balanced Scorecard, for example, is one of the key levers that upper management uses to drive success through data. The Balanced Scorecard holds teams accountable to key priorities, such as cost reduction initiatives, and tracks how each team ultimately impacts the bottom line.

The company has, however, had its share of issues. We all know it went through a bankruptcy seven years ago. But it came back stronger because leadership used a data-driven approach to focus on key areas of improvement.

Regarding successful data initiatives at Westinghouse, there has been continued focus on transformation initiatives, namely safety and cost reduction initiatives. It’s the nuclear industry, so there’s a lot of focus on safety.

Our initiatives also focus on customer growth. This includes understanding what the market needs from a nuclear perspective and using that information to develop newer products. You must have seen the recent interest in Small Modular Reactors (SMRs). And you know Amazon and Microsoft especially, have started purchasing nuclear energy to drive their AI initiatives. These are some of the initiatives, all of which have been data-driven, that have helped the company to stay relevant.

Where do you see the most impact? Are there any specific data integration types or initiatives you’ve seen disproportionately contribute to supporting your company’s priorities?

[Srinivas] Yes, quite a few. One example is how we have systematically transformed the way we track customer demand, and report that to the entire company. Other initiatives include providing data to different parts of the company to improve operations. This has included helping teams optimize staffing, manage material procurement, and manage installed capacity at manufacturing plants. There are great examples of integrating various data elements to make sure that we are operationally ready to service our customer demand.

As the leader of Data and Analytics at Westinghouse, what are your priorities for your organization? Also, how have your priorities evolved over time?

[Srinivas] I group our priorities into four streams of work. But before I get there, let me define what I mean by ‘priority’. Our priorities are formed by looking at the entire data lifecycle: you have to look at how data is generated, and follow it all the way to how and why it is used. That lifecycle is what leads to our four priorities. There is also a focus on core areas, including underlying data quality and the different competencies and roles required to achieve our priorities. In short, the data lifecycle drives our priorities, and our priorities are about using data to achieve our business outcomes.

The four streams of work that define my priorities are transformation; information delivery; quality and governance; and data competency. I spoke earlier about the types of transformation initiatives we’re doing. The second stream is improving information delivery: now that we are ingesting all this data, how do we report it to the right stakeholders so that they can use it for decision making? The third stream is constantly improving data quality and governance across the organization. Finally, the fourth stream pertains to data competency. It’s a basic requirement to have focused effort around improving the data literacy and competency of our Data Analysts, Data Scientists, et cetera.

As far as how these priorities have evolved over time? 

When I first started, I had a similar framework. But when I started implementing this roadmap, or when the rubber hit the road, we found that some areas required more attention than others. So while we started off with a focus on data quality and information delivery, it soon became clear that the underlying processes that generate the data were not as strong as we would have liked them to be. We found that these underlying processes were having negative impacts on downstream processes and data quality. We then changed the priorities to focus on basic process optimization so that we could shore up data quality. That’s generally how the priorities have evolved. But it is still part of the overall framework that we put together initially.

How are those priorities reflected in your roadmap? 

[Srinivas] From a roadmap perspective, we take a three-year approach. But foremost, the key priority is to make sure that stakeholders understand the value of the initiatives on our roadmap, that they see what kind of value each roadmap item offers. Therefore when we start, it’s always about making sure that each transformation, or rather, each business outcome, is clearly defined. That forms the basis of our roadmap. From there, we start prioritizing projects that have higher value, in terms of business outcomes, into our roadmap.

We also prioritize problems that are impacting multiple organizations. That’s the other value that my program brings in. And because of where we sit, rather than just focusing on something that’s just impacting one division, we focus on things that are impacting multiple divisions so that we can offer cross-team or cross-division collaboration. 

Then, we also try to prioritize initiatives that improve data quality, so that they are not seen as just academic exercises. This also includes initiatives across all aspects of the data, including governance and competency. It also includes information delivery, where stakeholders can really see the data for the first time. Those are some of the initiatives that come to the top of the roadmap.

How do you measure business impact? 

[Srinivas] The first thing we try to focus on are metrics that are part of the Balanced Scorecard. That’s the easiest way to sell initiatives to the leadership team and my peers. We have a very common metric, ‘on-time delivery’, on the Balanced Scorecard. Similarly, we have another metric pertaining to inventory management. Since inventory belongs to the budget management section of our scorecard, initiatives that positively impact this area can then be tied to the budget adherence portion of the Balanced Scorecard. These are the ways we make sure projects are always linked to one or more priorities within the Balanced Scorecard and are more likely to demonstrate business outcomes.

What is the CEO’s method of driving the data-driven nature of the company? Is the Balanced Scorecard their direct approach to driving a data-driven culture?

[Srinivas] The Balanced Scorecard usually has anywhere from a dozen to 15 different metrics. Currently, these metrics are spread across five categories. Usually safety is up front, and there are also quality, financial, delivery, and people categories as well. So there are five common buckets, and each one has one or more metrics that it’s evaluated by on a monthly basis.

These metrics then drive a whole bunch of diagnostics. If something is not up to standards or a target isn’t being hit, then leadership will ask, what happened? Why did this happen? And how can we fix it? This in turn drives subsequent actions, including the proposal of new initiatives needed in order to meet or exceed the target. After the goals have been set, they are then cascaded down the organization.

This process occurs both at the company level and at key divisions as well. Operations, which is two-thirds of the company in terms of headcount, is where my program sits. We have an independent Balanced Scorecard because we have multiple divisions.

Earlier, you mentioned that you see data as the driver of innovation. You’ve also mentioned in the past that you lead through transformative leadership. This all kind of rolls up into this idea of the need for a culture of data. Can you define what a culture of data is and what that might look like?

[Srinivas] Data culture means you have, for example, the right incentives at all levels of the organization that are focused on data. So that’s one. The other one I would say is that KPIs related to data are included in the top-level, company Balanced Scorecard.

There are other aspects, including high data literacy and knowledge of data concepts at the leadership level. We’re not talking technical skills, but there needs to be some key things that leadership understands about data. 

Another aspect is the existence of a data-driven ecosystem that is constantly nurtured. Once it’s established, it needs to be nurtured so that people at all levels of the organization know key decisions are data-driven. These are some of the hallmarks of a data culture, or data-driven culture.

What has been your approach to fostering this kind of environment and culture? Is it something that you develop in your people or do you hire for it?

[Srinivas] It’s definitely been a journey. But my approach has been, first of all, getting the buy-in from my peers and my boss. The other thing has been focusing on some key areas like, for example, data quality. People instinctively know what high quality means. Everyone knows what a high-quality TV or computer looks like, but they didn’t instinctively know what data quality means. How do you measure and actually quantify data quality? How do you even define it?

So I put together a program around teaching leaders about the attributes of data, including data quality. This program opened their eyes, and there was an ‘Oh, wow’ moment, where people realized that this is what data quality is. 
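To make the idea of quantifying data quality concrete, here is a minimal, hypothetical sketch of how two commonly cited quality dimensions, completeness and validity, can be scored for a set of records. The field names, records, and validation rule below are illustrative assumptions, not Westinghouse's actual data or program material.

```python
def completeness(records, field):
    """Share of records where `field` is present and non-empty."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def validity(records, field, is_valid):
    """Share of records whose `field` passes the `is_valid` rule."""
    if not records:
        return 0.0
    ok = sum(1 for r in records if is_valid(r.get(field)))
    return ok / len(records)

# Hypothetical shop-floor order records; one is missing a ship date,
# and one has an impossible (negative) quantity.
orders = [
    {"order_id": "A1", "ship_date": "2024-03-01", "qty": 10},
    {"order_id": "A2", "ship_date": "",           "qty": 5},
    {"order_id": "A3", "ship_date": "2024-03-04", "qty": -2},
]

print(completeness(orders, "ship_date"))  # 2 of 3 ship dates filled
print(validity(orders, "qty", lambda q: isinstance(q, int) and q > 0))
```

Scores like these give teams a baseline ("we are at X today") that later re-measurement can be compared against, which is exactly the before-and-after framing described later in the interview.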

This resulted in us implementing an entire data literacy program. The program we implemented measures levels of data proficiency, starting at a yellow belt and escalating up to a green belt. We’ve not only established this program, but have also woven it into concepts that leaders already understand.

If I had just focused on educating leadership about data quality, then the risk would have been that it would’ve just been perceived as an academic exercise. But by incorporating data quality into transformation initiatives we were able to demonstrate actual business impact. This is how you get the data culture going in the right direction. 

To your follow-up question, around hiring versus developing existing talent?

I would say both. But I definitely would not discount the skills we already have. Westinghouse is an engineering-heavy company, so there is definitely a strong skill set already present here. With the right skill set already in place, it has just been a matter of training people in these newer concepts so that we do things the right way. Once they’ve learned, like ‘oh yeah, this is data quality, I get it now, this is how it impacts things’, then they can apply their skill sets in the right direction.

Now, there’s also the problem-solving mentality that has been important to foster. This means explaining to people how solving problems in a fragmented way results in technical debt; for example, what happens when you take a fragmented problem and build a solution without an overarching vision. These are just a few of the ideas that have helped us move in the right direction when it comes to establishing a data culture.

Would it be possible to discuss the data literacy program you put in place at Westinghouse? What are the pillars of this program, and what have been the outcomes from this program being in place?

[Srinivas] Broadly speaking, there are two tracks, if you will. The first track is more technical and is handled by a different division. This track involves training people in hard skills like Python, R, visualization, or other data science concepts.

The track I focused more on was what I call ‘data literacy’. It was very different from the data science based training. Here I focused mainly on things like, how does data-driven decision-making work? Which steps are involved? Where do we start? What do we need to communicate in order to drive data-driven decisions? There were key aspects that we definitely wanted to incorporate; this included asking the right questions, taking those questions and forming them into mathematical problems, acquiring the data, and using the data to communicate the answers. 

The second area I focused on was data quality. Why, what, and how do you define it? What do we do that impacts quality? How do we overcome those issues? That helped surface a lot of things the team hadn’t realized. We also covered data governance and related topics.

Then the last aspect we covered was general data culture and data strategy. What does it mean when we say we have a strategy, and what aspects does it cover?

The program was meant for people in leadership roles because I strongly felt that they should be data literate. At the same time, however, I also told people that if their job involved the ingestion of data and their outputs were data (and there are many business processes like that), then they should take the course. For people who worked with data, the course helped them drive the right behavior in their respective teams.

What have been the results, what have you seen as a result of the program? Do you have any anecdotes coming from the teams who have adopted this program and been able to develop a data culture?

[Srinivas] One area is the focus on data quality, even though it’s by no means complete. People are now willing to assign their teams to these projects, saying, ‘yes, we need data quality improved in that area, so let me give you three people, put them on these projects, and help them do those projects’. That’s been a positive impact, and they’ve actually shown impact through numbers. They took a key data set, measured it, and said, ‘hey, we were at X. We have done all these things, and today we are much better in terms of our data quality. And by the way, by doing this, I’m having to spend less time cleaning up other stuff at a later date’. These are some examples of the impacts of the program that I’ve seen.

The other aspect is a whole bunch of data visualization projects where teams are doing the right things, asking, ‘hey, where’s the data coming from? Is the underlying source a spreadsheet on someone’s desktop, or is it coming directly from our system of record?’ And they are actually moving in that direction. They’re like, ‘Okay, I don’t want to use the spreadsheet. Let me go and get what’s in our SAP system, because that’s our system of record’. Those kinds of behaviors are still few, but moving in that direction is what the program has accomplished. And then there’s the willingness to focus on maturing the processes, which is another by-product I’ve seen of the program. It’s been incremental progress, but when you’re talking about a bigger organization, I think it takes time. That’s why we use this analogy now: ‘crawl, walk, run and then fly’.

“When you’re talking about a bigger organization… we use this analogy now; crawl, walk, run and then fly.”

Leadership will often say the best way to change and develop data culture is to start small. How do you bridge the gap between strategy and execution? What do you recommend to a leader who wants to initiate this change?

[Srinivas] In this context, we often hear about the need to start small or achieve quick wins. However, there are definitely certain pitfalls in that approach. Before starting, I think it’s important to first establish and define the value proposition: you have to know why the program should exist in the first place. Be very specific about which areas the program will benefit and what value will be realized, so that the initiative isn’t vague.

You can do this by establishing key buckets of value. Keep them simple, and don’t include anything beyond the most important items. From there, you can define the kind of work the program will do, including which areas it will target. It cannot be everything, so when it comes to a data and analytics program: which areas will it target? How will it work? How will people see what each work stream accomplishes?

It’s important to establish the value of an initiative because, without a clear value proposition, a project could create all the value in the world but it won’t go through the pipeline if no one is able to understand its value. Why does this program exist? What is the defining value? What areas will the program target? And once those are answered: how will we work? What exactly will you see? What kind of projects will be initiated?

This process may go through phases of refinement. When I first came into Westinghouse, it was a new program. I spent several months establishing the first cut of the vision and roadmap. Then, when we actually started implementing it, there was some good feedback. When this happens, you realize things that you did not catch during the first pass because now you’re digging into the weeds. On the initial pass, even people with good intentions sometimes can’t completely grasp the vision. But once you actually start executing, they say ‘oh, this is what you meant by that’, and then everything changes a bit.

And I believe the second iteration or version two is always stronger. So that is how you reduce the gap between theory and execution. Again, it’s about picking those projects or initiatives which people can not just relate to, but also see the impact of. Once you’ve done that, frame them in such a way that you’re incorporating certain data aspects into them. That’s the best way to reduce the gap between your roadmap and execution.

“It’s about picking those projects or initiatives which people can not just relate to, but also see the impact of.”