DISCLAIMER: The following is the output of transcribing from an audio recording with the use of AI. Although the transcription is largely accurate, in some cases it is incomplete or inaccurate due to inaudible passages or AI transcription errors. It is posted as an aid to understanding the proceedings of the meeting, but should not be treated as a valid record.
Ryan: Well hey everyone, welcome back to the MAD Data Podcast. My name’s Ryan. I’m the host of the podcast. We have a very special person here today, Jessica Snyder. She’s a program director of product management over at IBM. How are you doing, Jessie?
Jessie: I’m doing great, Ryan. Thanks for having me today.
Ryan: Yeah, it’s been great to get to know you. Obviously, Databand is now a part of IBM, and you should feel so special because you are our first IBM employee to be a part of this podcast.
Jessie: Woo hoo. I love that. I feel so honored.
Ryan: We had a fun time catching up at the Gartner conference in Orlando. We had a really fun time there, had really good talks with customers about data fabric, and obviously today we’re going to be talking about the future of data fabric and also data integration. But before we get going here, you don’t have to talk about how Ohio State beat Penn State. We won’t talk about that.
Jessie: Listen, it’s a sore subject.
Ryan: But I would love to get just a little bit about yourself and how you got to be a program director over at IBM. Our audience loves to hear just career paths of how people got into the space, especially data. We are a data podcast.
Jessie: Yeah, of course. So I’ve been at IBM for about ten years. I’ve actually spent my whole career here. I did two internships with IBM when I was in college. I went to Penn State, hence my Penn State sweatshirt today, and I have a major in computer science. So I actually started as a software developer at IBM. I focused primarily on full stack development and then switched into frontend development for a while. And then about five or six years into my career, I kind of figured I’m definitely not going to be the world’s best engineer here. I just kind of felt like I had tapped out there, and I was really curious about how we made decisions, how we ran the business, and what the market opportunities were. I was really interested in how we actually build product and take it to market. So I did a brief stint as a chief of staff for one of our senior execs at IBM, which was a really great experience; I learned a lot of the ins and outs of how we run the business at a high level. That role is kind of meant to be a launch point into a new chapter in your career, so when I was done with it, I was able to take a leadership position as a product manager, and I spent about two years in product management for the portfolio that I now run. And then in January of this year, I was promoted into this position. So I run the data integration portfolio and I have five product managers on my team, and they’re awesome. I love this job. It’s the best job I’ve ever held. It’s so fun. Every day is a little bit different, which is nice. The things we work on are really exciting, and I get to work with tons of great clients and have lots of great relationships. So I really, really love this role and of course recommend product management to anyone who’s interested in getting into this side of the business.
Ryan: And you get to work with really cool people like me.
Jessie: I do. Yes, of course. That was top of the list. I don’t know. I don’t know why I didn’t mention that one.
Ryan: Yeah, well, it’s been fun working with your team, and you have an awesome team over at the data integration side of the data fabric. It’s interesting you said that you came from a software engineering background and now you’re in product management. I usually don’t hear that, so that’s cool to hear. Way back in the day, I used to be a software tester.
Jessie: Hey, there you go. That’s a tough job. Yes.
Ryan: Very tough. Yeah. Going and telling a software engineer that their code is wrong is probably not the best way to start a morning.
Jessie: Yes, there was a bit of friction there, right, between QA and development.
Ryan: Yeah. It’s always a battle, right? It shouldn’t be. We talk about that a lot: you’ve got to break down the silos. Even in data, the merry-go-round of blaming others is a constant issue we see, obviously. But this kind of segues into our topic today, which is really around the future of data fabric and data integration. What I want to do first, though, for people that maybe don’t have an idea of what a data fabric is, is have you give us a rundown on how we got to this idea of data fabric. I know we’re not going to have the data fabric versus data mesh conversation, but that’s probably the first thing people ask about: what’s different?
Jessie: It’s like a very religious debate right there.
Ryan: Religious, yeah. Give us a rundown of how we got here. How did we get to this concept of data fabric? It seems to be taking off, and obviously IBM plays a major role in that.
Jessie: Yeah, totally. So the first thing to understand is that data fabric is definitely a relatively new concept in the market. If you look at some of the major analyst firms like Gartner, for example, they project that data fabric won’t reach maturity for another five or ten years. So we still have a ways to go in the market and in how enterprises adopt a data fabric architecture. But I think that’s the key word right there: architecture. A data fabric is an architecture. It’s not a product or solution that someone can sell you. We can sell products that help you implement the data fabric architecture, but it’s really important to know that it’s an architectural paradigm. And you mentioned the term data mesh, and everyone has a little bit of their own definition, by the way, and that’s okay. I think the crux of the issue is that we see a data fabric as being the overall umbrella idea on which you build different use cases. And we see a data mesh as being a use case or implementation of a data fabric where you’re really focusing on domain-specific data and you’re creating what we call data products, which are actually just collections of data that are specific to a particular domain. Usually we’re involving someone from the line of business in creating those, and then we’re creating some sort of delivery mechanism for the line of business to consume that data. But to us, the data fabric is the piece that everyone needs help implementing. And in terms of how we got here, it’s funny, I was at a CTO roundtable a couple of months ago and I had one of the data analytics leaders say to me, how is this any different from some of the paradigms that we’ve seen in the past? We’ve kind of gone through this cyclical nature of different data architecture paradigms that are going to help solve all of our problems. Right.
We’ve always had this problem with proliferation and siloing of data. So I think the place where data fabric really helps is that we’ve never truly been in this type of situation where we have so many massively distributed data ecosystems. In the past, when we had on-premises software that sat behind your company’s firewall, it was a lot easier to control everything. Even if you had tons of different on-premises databases and various data warehouses where you were trying to wrangle data and deliver analytical and operational use cases to the business, you still had full control over the data that your enterprise was operating on. You still had full control over the governance of that data, like who had access to what. There weren’t nearly as many cybersecurity concerns; there were concerns around how you managed those data sources if they potentially sat in another location. And now, as the market and the industry have evolved, as a lot of enterprises are starting to adopt cloud-native workloads and shift infrastructure to the cloud to help save on cost, we’re starting to see not only massive proliferation of data across different data landscapes, but also entire enterprises fracturing, where, especially if you look at some of the largest clients in the world, you might have different parts of the business standardizing on different tools and even on different clouds. Like, I work with lots of clients who are spread across AWS and Azure and GCP and IBM Cloud, and they have like 20 different tools for integration. And so it’s kind of become this big mess, right?
So the idea and the premise of the data fabric is to start looking at how we can centralize where it makes sense, or logically group where it makes sense, and have a strategy where we bring those distributed data locations together, whether physically or logically, and start to be able to understand all of the data that our enterprise has access to. And then the idea eventually is to layer in things like automation and intelligence, so you’re eventually getting to more of a self-service consumption model, so that you’re actually getting the right data into the hands of the right people at the right time.
Ryan: Yeah. Obviously, with Databand now being a part of IBM, and obviously that’s not what this podcast is all about, it’s really just talking to people, we’ll talk about that later. But one of the things that I found really cool about the fabric side was that when I was talking to people, they would say, well, do we have to adopt the whole fabric to get the advantages of it? And the way I communicated it was like, no, we’re meeting you where you’re at today and where you want to go. It’s not like, hey, pick up and move everything right away. It’s, hey, like you just said, we’re going to look to get people the right data at the right time, in the right place, within the context of the things you have going on today. Maybe we can standardize things, but obviously we’re not saying, hey, we’re going to rip out everything and move it, because I think people are tired of that.
Jessie: Yes, exactly right. So the data fabric definitely has key building blocks that make it a data fabric. But the whole idea is you don’t have to do a wholesale rip and replace of what you have today. In fact, I think it would be almost impossible for most companies, even small companies, to do that. We’re talking about multiyear journeys, typically, when we talk about a data fabric implementation, especially at a larger enterprise. But the whole idea is you want to start small, right? You want to pick a use case that would have value for your business. And you want to start bringing the pieces to that particular implementation, and start to look at what tools you want to bring in that are going to enable your data fabric architecture, how you want to set this up in a way that makes sense, and then tack on pieces from there. But you definitely don’t want to do a wholesale rip and replace. Definitely not a recommendation that we would have.
Ryan: I’m with you on that. So before we get into data integration, and I know you’re one of the main leaders over on the data integration side, we had mentioned there are some more common use cases that we see for data fabric. What are those? I know we’re going to talk about data integration soon, but what are some of the other ones that you work closely on with your colleagues or peers within the fabric?
Jessie: So one of the most common ones we see, and this is typically where most of our customers at IBM start when we are working with them on a data fabric implementation, is governance and privacy. It’s the most common one. Most often our customers recognize that they have all this different data and it’s controlled by different tools, and it’s sitting in different catalogs and different data warehouses, and they want to create a governance strategy that’s going to work across all of their different tools and all the different locations in which their data resides. So that’s typically the most common one, where we help them figure out, okay, how do you want to apply different governance policies? Because it’s important that certain things do need to be centralized, but we definitely don’t want to centralize everything. That was the kind of mistake we made in the past with the stereotypical data warehouse use case where you bring everything into a data warehouse. And we made that a little bit worse with the rise of the data lake, where, as I heard someone once say, we turned the data lake into a data swamp. We just threw everything into the data lake with no structure.
Ryan: Now we have a lake house, right?
Jessie: The lake house? Yes, the lake house is the new structure.
Ryan: I’m waiting for, like, the data roof. That’s my thing. I’m waiting for a data roof that goes on top of the lake and the house and the river.
Jessie: So, you know, there are certain things in governance that do need to be centralized, and then there are things that can certainly be decentralized. And so we help customers figure out what’s the right approach for them: how do we want to apply the right governance policies? How do you start to layer in things like the catalog? How do you start to bring in other parts of a data fabric, like an integration tool or toolset? We definitely see integration as being one of the crucial backbones of a data fabric, because you’re going to have a bunch of different use cases that you’re eventually going to be trying to solve for or implement, and you’re probably going to be operating with a bunch of different data depending on the size of your enterprise and the industry that you’re operating in. So it’s really important to be able to choose the integration style that makes sense for the data that you’re working with and the use case that you’re trying to deliver.
Ryan: I heard you mention style, so I want to talk about this. First of all, what are maybe some misconceptions of what data integration means, and does that tie back into some styles that we’ve heard of? I’m sure there are tons of different new ways of doing this that pop up. I don’t want to say them because I don’t want to spoil it, but I know there’s a bunch of acronyms in my head that I want to throw out there. I’ll hold off.
Jessie: Yes. So ETL is definitely the main integration style that we see being used today. It’s, I would say, the oldest integration style; it’s been around the longest. There are plenty of stats from various analyst firms, but I think if I’m quoting Gartner correctly, something like 88% of enterprises still use ETL as their main integration style today. Then there’s the ELT version of that, which focuses more on ingestion. That’s cropped up more recently as a popular integration style, because it usually also leverages the concept of pushdown, where you can actually do the transformation logic and the compute in the target, typically a data warehouse or a lakehouse. That helps offset costs on the integration side, especially if your integration servers are expensive, so it can help a lot of our customers save on costs. But some of the other ones that we’re seeing: replication is a really big one, right? This idea of real-time synchronization of data; you’ll hear the term change data capture quite a lot, which falls into the replication space. Data virtualization is another one. Data virtualization started out almost like federation and has evolved into a much more advanced concept. This one is really great for data science use cases, and it’s really good for data quality too, like being able to profile in place. So virtualization is becoming increasingly important in the integration landscape. And then we have the newest one, though I guess I shouldn’t say newest: message- and event-oriented integration has existed for a long time, but we’re seeing it kick off the real-time integration market, and we’re starting to see this massive trend towards the need for real-time integration solutions, where we can process streaming data, real-time data, or message-oriented data coming from things like IoT devices. Retail is really big with this one.
If you think about anything that’s executing a transaction, any time you swipe a credit card or click checkout on an e-commerce website, there are so many different use cases for streaming or real-time data. So that’s probably the newest one. But I think the thing that’s important, and this is one of the misconceptions that I see, is that if you’ve been in this business for a long time, we often find a lot of customers who started with ETL and then stopped there, and now they’re trying to fit a square peg into a round hole by using ETL for all of their new use cases. In reality, it’s really good to actually take a look at what you’re trying to solve, like what is the problem you’re trying to solve, and then back into the integration style, as opposed to just saying, let’s build pipelines for everything.
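To make the ETL-versus-ELT distinction above concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in "warehouse." The table and column names are purely illustrative, not from any IBM product; the point is only where the transformation compute runs.

```python
import sqlite3

# Stand-in "warehouse": an in-memory SQLite database holding raw order rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [(1, 1250), (2, 399), (3, 10000)])

# ETL style: Extract rows out of the source, Transform them in the
# integration layer (here, plain Python), then Load the result.
rows = conn.execute("SELECT id, amount_cents FROM raw_orders").fetchall()
transformed = [(order_id, cents / 100.0) for order_id, cents in rows]
conn.execute("CREATE TABLE orders_etl (id INTEGER, amount_usd REAL)")
conn.executemany("INSERT INTO orders_etl VALUES (?, ?)", transformed)

# ELT style: the raw data is already loaded, so "push down" the transform
# as SQL and let the target engine do the compute, instead of the
# integration server.
conn.execute("""
    CREATE TABLE orders_elt AS
    SELECT id, amount_cents / 100.0 AS amount_usd
    FROM raw_orders
""")

# Both styles produce the same result; only where the compute runs differs.
print(round(conn.execute(
    "SELECT SUM(amount_usd) FROM orders_elt").fetchone()[0], 2))  # 116.49
```

The cost argument Jessie makes follows directly: in the ELT version, the integration layer never holds or processes the rows, so expensive integration-server cycles are traded for warehouse compute.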
Ryan: Yeah, that’s something that, at least at Databand, when we talk to customers, they have a mixture of a lot of this stuff. It’s traditionally ETL, but outside of ETL, the main one that they’re experimenting with or doing right now is the streaming and event-based stuff. Yeah, they all mention the word Kafka.
Jessie: Yes, Kafka is the major player in that space. Right.
Ryan: But again, too, a lot of times they’re talking about how they started doing it for a brand new product or a brand new area of the business, versus trying to replace their current ETL processes. Eventually they’ll get there, you would think, but they understand, just like you said, that’s a big, big task, to reconfigure everything. There’s probably a lot of technical debt, a lot of stuff that you’d have to do to get things in line. So I like what you said about just, hey, look for opportunities, experiment on these things, and not get, you know, blindsided or tunnel-visioned with something you’re just familiar with. That’s a common thing, right? And that goes across all disciplines. It’s like you have this product marketing framework you’ve used for the past five years; maybe test something else out for a new product launch. Same thing.
Jessie: Yeah. New workflows are definitely the easiest and best place to try out new things, right, because there’s no technical debt that you’re carrying with them. I mean, you might have your own technical debt in the sense of not having the skills on the new tool or the new technology, but that’s a point-in-time statement, right? You’ve just got to sit down and learn the new thing and then apply that to whatever you’re working on.
Ryan: Well, the next thing I want to talk about, which is kind of the last topic, really is the next big thing for data fabric. And obviously this is a little self-serving, for the people listening, sorry, but we figured we might as well just be transparent. It really is this idea around observability and data observability. Give us an idea of how you’ve seen that fit into the data integration use case as well, and how it plays into the future of data fabric as we know it.
Jessie: Yeah. So observability is definitely one of the hottest topics I am hearing about in my own customer conversations. Sometimes customers don’t even know that they need it, or they don’t know what the word is; they just know that there’s a problem they need to solve. And I think when we talk about the data fabric, one of its core concepts is that automation piece, building in automation and intelligence, having the data fabric almost start to learn and help deliver the right data to the right people at the right time, like I said before. But the issue is that only takes you so far. You can build in as much intelligence and automation as you want, but unless you have something that’s going to give some level of observability into your data pipelines, into your data architecture in general, there is still this question at the end of the day: when I’m sitting at the tail end of the data pipeline, right, I’m sitting in the line of business, I’m an analyst, I’m trying to build a report, I’m in finance, I’m trying to look at this data that’s representing my third quarter results, the thing that matters is whether the data is correct. And you absolutely need an observability platform to give you that confidence that your data is right. So the way that we’re starting to talk to our customers about this is helping them understand: when you look at your data pipelines today, where do you see problems? What are the things that you’re struggling with? And a lot of times it is this idea of, I have all these pipelines scheduled and something fails and it blocks the execution of everything else. Or everything will run, but then all of a sudden the volume of data that’s supposed to be moving through this pipeline, which usually pushes hundreds of gigs of data through it overnight, drops off.
And all of a sudden we’re down to five gigs of data, or my reports were empty or missing some values, and it took me two days to figure out where the break in the system was. And so this concept of data observability, I think, has really grown out of the idea of application observability; I think that’s really where observability started. If you’re thinking about application monitoring and observability, it’s really easy to tell when your application goes down, right? It’s immediately obvious: you get a 404 when you hit that website, and you know something’s wrong. But data is much harder to pinpoint. It’s much harder to figure out when something is wrong, and it might be minutes or hours or days, hopefully not weeks, but sometimes weeks, before you actually figure out that there’s a problem. And so I think that a lot of organizations are starting to realize, after they put in all this time and energy into building a data fabric architecture and they’re really starting to get going with this new concept, that they’re still having this problem of, oh my gosh, my data isn’t right. I don’t have visibility into what’s being put on this report, what’s being fed into this machine learning model. And that’s really where observability is going to come in and help.
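The overnight volume drop Jessie describes is exactly the kind of thing a pipeline volume check catches. Here is a deliberately crude, hand-rolled sketch of such a check; the row counts and the 50% threshold are purely illustrative, and a real observability platform would learn baselines and thresholds automatically rather than hard-coding them.

```python
# Hypothetical daily row counts from the last week of runs of one pipeline.
history = [1020, 980, 1005, 995, 1010, 990, 1000]

def volume_alert(history, todays_count, tolerance=0.5):
    """Flag a run whose row count deviates more than `tolerance`
    (as a fraction) from the trailing average. A crude stand-in for
    the automated volume checks an observability platform provides."""
    baseline = sum(history) / len(history)        # trailing average
    deviation = abs(todays_count - baseline) / baseline
    return deviation > tolerance

print(volume_alert(history, 1015))  # a normal day: False
print(volume_alert(history, 120))   # the overnight volume drop: True
```

The payoff is the mean-time-to-detection point made later in the conversation: a check like this fires the morning the volume drops, instead of someone discovering an empty report two days later.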
Ryan: Yeah. You know, I’ve heard this connected to data downtime or outages, which is basically a data quality or data reliability problem, essentially. And it’s very much like security. I was in software doing testing and software development, then I moved into security, and now I’m in data, and I see a lot of similarities between all these fields. From a software delivery perspective, it’s like, hey, we want to go as fast as we can and deploy as much as we can, as fast as we can, to be competitive, to get feature updates out, to patch things, whatever, right? So there’s this speed problem that the software application team feels constantly. That’s exactly what the data team also feels.
Ryan: People say they feel like they’re underwater, like there’s too much firefighting going on, and so on. So that’s the same there. But then on the security side, a lot of times you don’t know what you don’t know until something blows up, right? For software, you can pretty much know right away from a UX perspective, like you were saying: you log on in the browser and it’s not there, okay, something’s wrong with the software. But with data, you may not know. It really becomes like a security problem in that sense: a lot of times teams don’t know they have unknown security issues until it gets made public or something bad happens, and they’re just assuming everything’s okay. I think there are a lot of similarities between those fields, in that you want to go as fast as you can on the software side, but you also want to detect and figure out the unknowns while you’re going as fast as you can, so you don’t have these potentially really costly data incidents impacting the business.
Jessie: Yeah. And even operationally, right, speaking of speed, we often see IT teams who are completely overloaded. They have a zillion requests they need to get through. It might honestly take months for someone who has requested access to data to get what they need from the team. They’re usually really understaffed, especially coming off the heels of the pandemic; we’ve all heard about the great resignation. So these teams are totally overloaded, and what they spend the majority of their time on right now is not just building data pipelines, but debugging data pipelines and trying to figure out where things went wrong. And so if you can help lift that burden off of the team with observability, and you can take that mean time to detection from maybe days down to a couple of minutes, it saves so much time for the team, and they can really just focus on the work that matters instead of trying to debug where a problem happened.
Ryan: Yeah, I was on LinkedIn the other day, and there’s this gentleman who is a data engineer over at MIT. He was discussing, in a LinkedIn post, an example of data incident management, which is essentially observability operationalized within your organization. And he said this is basically how triage works: first, you frantically ping a senior data engineer who’s been on the team for four-plus years and ask them for urgent help. Second, if she isn’t available, you spend hours debugging a pipeline by spot-checking thousands of tables.
Jessie: Oh, God.
Ryan: Yeah. That’s kind of the reality: if you don’t have something in place that is constantly, continuously observing what’s going on with your data, you’re eventually going to run into an issue. And one of the things I like to do is make up analogies; sometimes I’m really good at them, sometimes I’m terrible at it. But here’s an analogy that I think makes sense. You know, there’s been a big increase in the popularity of Formula One racing recently. It’s exploding. Have you been to a Formula One racing event?
Jessie: My brother is like so into Formula One and like organizes his whole weekend schedule around the races. But I’ve never been to one.
Ryan: I haven’t either, but I want to go, because people tell me, oh, did you see this Netflix documentary? It’s awesome, it’s super cool. Well, here’s the thing with these cars, and I didn’t realize this until recently: these cars have hundreds of sensors on them as they’re driving. They track your tire pressure, your engine, your electrical, your brakes, all these things that tie into the car to alert the team: do we need to come in for a pit stop, or can we keep going as fast as we can and go a little bit further before we actually need to stop? Right.
Ryan: They don’t just drive blindly and look at their watch and go, okay, maybe I should check in with the radio tower, see if I can come in. No, the sensors are constantly telling them, hey, it’s about time to come in, you’ve got to get gas, you’ve got to change tires, all these things. And I was thinking about that and making the connection to a software or data engineering team: that’s what they’re doing. They’re trying to go as fast as they can to win the race, to deliver the results, but at the same time, they need something that’s going to help them in their job, continuously observing what’s going on, so they can be alerted: hey, these are issues that you need to address. And sometimes it could be an alert where you go, you know what, I don’t need that, thank you for letting me know, but it’s actually fine. I’m good, I think the car will make it, we’re okay, we’re going to keep moving, we’ll address it at a later date. And the other one is, you’d better pull over right now or your car is going to blow up.
Jessie: Yes, that would be bad. Well, Ryan, you said you were bad at analogies, but I think that was an excellent analogy for observability.
Ryan: Well, thank you. I appreciate that. I’ve used that analogy a couple of times.
Jessie: Hey, there you go.
Ryan: Well, hey, we’re coming up on time here. I did want to give a quick plug for the new integration that we have between Databand and IBM data fabric with DataStage. So feel free to give us a quick spiel on that.
Jessie: Yes, we’re so excited about this one. So DataStage is part of the product portfolio that I run. DataStage is our premier ETL tool inside of IBM. We’ve done a ton of work on it over the last three years to completely re-architect it to be cloud native. We’re really excited about it; it has a totally new, modern experience. We’ve essentially ripped everything out except our super performant engine and built this entire new experience around how you actually build modern data pipelines. So we’re super excited about that one. And I think just a couple of weeks ago, we launched the integration with Databand. What this gives us is the ability to take a look at a couple of different things from a DataStage perspective. DataStage is a no-code, low-code ETL tool, so we have this concept of a flow, which is effectively the data pipeline that you’re building from a design-time perspective. We can look at different properties for a flow, and we can look at different stages in the flow. A stage represents a particular connection or a transformation or manipulation of the data in some way. So we can look at stage-level characteristics, and we can actually look at the data as it’s flowing through. We can look at schema changes on the connector side, and we can look at input and output row-level metrics. So there are tons of different things that we can observe. And one of the things about Databand that I love is the lineage view, so you can actually see where a problem is happening and drill down into exactly where the problem occurred and really quickly triage what’s going on. The analogy that I love to use is that Databand is like the Google Maps traffic view. When you’re looking at Google Maps, you can see real-time views of where traffic is super heavy, where there’s construction, where there’s been an accident, where there’s a toll you have to pay.
And so it gives us real-time insights. Google Maps gives us real-time insights into traffic conditions, and Databand gives us real-time insights into how our data pipelines are performing. But one of the best things is that, obviously, an ETL tool doesn’t exist in a silo. With Databand, we can really get more insight into the end-to-end flow a customer is using, with hooks into the orchestration tools they might be using. So there’s tons of flexibility with it. We’re super, super excited about it; I think it really takes our capabilities to the next level. And we were very excited to be the first IBM product that Databand integrated with. So super excited about that one.
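To make the flow-and-stage idea Jessie describes concrete, here is a minimal toy sketch in plain Python. It is purely illustrative, not the DataStage or Databand API; every name in it (`StageMetrics`, `run_flow`, the example stages) is hypothetical. It just shows the kind of per-stage metadata she mentions: input/output row counts and the schema at each stage boundary.

```python
# Toy illustration of a DataStage-style "flow": an ordered list of stages,
# where we record row counts and schemas at each stage boundary — the same
# categories of metrics an observability tool would collect. Hypothetical
# names throughout; this is not any vendor's actual API.
from dataclasses import dataclass
from typing import Callable

Row = dict  # one record flowing through the pipeline


@dataclass
class StageMetrics:
    name: str
    input_rows: int = 0
    output_rows: int = 0
    input_schema: tuple = ()
    output_schema: tuple = ()


def run_flow(rows: list[Row], stages: list[tuple[str, Callable[[list[Row]], list[Row]]]]):
    """Run each stage in order, capturing row-level metrics and schemas."""
    metrics = []
    for name, fn in stages:
        m = StageMetrics(
            name=name,
            input_rows=len(rows),
            input_schema=tuple(sorted(rows[0])) if rows else (),
        )
        rows = fn(rows)  # apply the stage's transformation
        m.output_rows = len(rows)
        m.output_schema = tuple(sorted(rows[0])) if rows else ()
        metrics.append(m)
    return rows, metrics


# Usage: a filter stage (drops rows) followed by a transform (changes schema).
data = [{"id": 1, "amount": 10}, {"id": 2, "amount": -5}]
out, metrics = run_flow(data, [
    ("drop_negative", lambda rs: [r for r in rs if r["amount"] >= 0]),
    ("add_tax", lambda rs: [{**r, "amount_with_tax": r["amount"] * 1.06} for r in rs]),
])
```

Comparing `input_rows` to `output_rows` per stage flags where records are dropped, and diffing `input_schema` against `output_schema` surfaces schema changes — the same signals that let a lineage view pinpoint which stage introduced a problem.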
Ryan: Yeah, I know, it’s awesome. I mean, one of the things we really made sure of is that, just like the integrations we have with any version of Airflow you’re using, or Spark, or dbt, or code-driven pipelines in Scala or Java or Python or whatever, we’re taking all of those same capabilities and bringing them to this next-gen version of DataStage as well. So again, like you said, this goes back to your initial point about meeting people where they’re at. We have a lot of customers that are using Airflow alone for almost everything they have, or they’re all in on Databand and they don’t have DataStage, and that’s totally fine. We’re bridging the gap between some of the modern tech that’s going on at IBM and a lot of the current tech that’s out there, especially in the open source community, and being the observability tool that goes across all of those. So I’m excited about it. Obviously we did a webinar on it, so if you’re interested, go watch the webinar. But the last couple of things before we head out here: we talked about a lot, so what’s one thing you want somebody to take away, and then how can people connect with you after this?
Jessie: Yeah. So I think one thing to take away, obviously on the observability side, goes back to the idea that some teams might not even know what to ask for. The important question is: are there things your data teams are struggling with today, or areas where they’re spending a lot of manual time and effort? If so, you probably need an observability platform, right? It’s one of those things where oftentimes it’s a discovery journey from the engineering side, where they have to raise their hand and say, hey, we need help with this. So if you’re sitting on the business side, or if you’re managing a data engineering team, it’s good to check in with the team and see what you can do to make their lives easier, because the return on investment is enormous. So that’s the observability side. From a data fabric perspective in general, it goes back to what we talked about at the beginning: don’t let the concept of a data fabric architecture scare you. It’s not something that needs to happen overnight, and it’s not something where you need to throw out your whole infrastructure and start fresh. You just want to take the one use case you want to start with and get started building small. Then you can expand later and bring in other parts of the fabric as it makes sense, but you don’t have to bring in every single piece. So I guess I gave you two pieces of advice, but I think those are two very important concepts. And then in terms of finding me, I am on LinkedIn and I’m on Twitter, so definitely reach out. I’m in the Boston area, so if anyone’s local to Boston, I’m always willing to meet up and chat about your data needs, or just talk about problems you’re facing in the data engineering space. I’d love to connect with anyone who wants to reach out.
Ryan: Well, Jessie, thanks for coming on the podcast. Really appreciate you being the first person.
Jessie: Yeah, thanks, Ryan.
Ryan: And you know what? It’s going to be hard to top. We may not have any other people from IBM on this podcast.
Jessie: I’m really super happy. There you go.
Ryan: Well, hopefully you’ll see Jessie at some conferences next year. Connect with her on LinkedIn and follow her on Twitter. Thanks, everyone, for listening. And Jessie, we’ll talk soon.
Jessie: Thanks Ryan.