Interview with Best Published Paper Award Winner


Theodore (Teddy) DeWitt (TD), University of Massachusetts, Boston, interviewing Paul Leonardi (PL), UCSB (University of California, Santa Barbara), winner of the Best Published Paper Award for “Social media and the development of shared cognition: The roles of network expansion, content integration, and triggered recalling.” (Abridged to fit interview format.)

 

TD: Congratulations on winning the OMT Best Published Paper Award. Can you briefly highlight what your paper is about?

PL: The paper is based on the idea that the more diverse [the] set of colleagues that we have in our organization, and the larger the organization is, the less we know about what anybody actually does. And that low level of, sort of … micro-level knowledge and detail about what other people do is really important to helping teams work effectively – for knowledge transfer, for organizational performance, and lots of other areas. And lots of research has shown that we need to know what people do in order to use them effectively as resources for us. And with the advent of social tools in organizations, I've been curious for some time about whether the affordances of these tools might provide an opportunity to help people develop better meta-knowledge – knowledge about who knows what and who knows whom across the organization. And the reason I thought this might be the case is that it appears that the social tools help to enhance the visibility of people's communication and interaction. And what I mean by that is that, you know, we communicate very frequently through mediated platforms. Most of the time, though, that communication is invisible – invisible to everybody but the people that are involved in it. So if I send you an email, nobody knows that I sent you an email unless I purposefully put them in the CC line. People don't know about the contents of this Zoom meeting that we're having at the moment, because it's not available, it's not visible to people outside of this dyadic interaction that we're having.

But social media tools provide this kind of open platform that invites third parties to be voyeurs on dyadic encounters. And because you can see what other people are doing and saying, my thought was that perhaps that communication and that interaction are rich enough for third parties to make inferences about what's going on. And so based on that kind of insight, and as I was thinking about the existing literature, one of the things that the existing literature shows is that it's important, for coordinated action, that people in teams and groups have … [a] more or less similar sense of where knowledge lies and where expertise is.

So, what I wondered was: if these tools are making communication visible to third parties, then the fact that everybody in our group or our department or unit – whatever the level of analysis is – is on these tools means that there's a common base of knowledge that we're all drawing from to make our inferences.

There's a possibility for enhanced shared cognition: not necessarily more accurate cognition – that's a different question – but we at least have a shared sense of who the right kinds of people to approach for certain problems are and who they aren't, who's expert and who isn't in various topics.

And so that's what I really set out to explore, you know: number one, does that actually happen … do social tools, by virtue of this communication visibility, help to build more shared cognition in the social group? And second, if [that is the case], what are the mechanisms at play that are making that possible? And that really was the whole motivation [of] the paper.

TD: So it seems like a rare opportunity to come across an organization that is trying out, sort of … an internal social network, because I think the landscape is littered with companies that have tried that and it has not quite worked out. [So] I was wondering how you got access to this particular opportunity.

PL: It's a great question. So I taught an executive ed class about managing innovation. One of the things that we covered in that class was the potential use of social tools for knowledge-sharing. And at the end of the couple of days of this exec ed program, a woman who was in marketing at this company came up to me and said, “we are planning on rolling out a social tool pretty soon, and I've been working with the director of internal communications for the company, and I don't think that we've thought through a lot of the things that we've been talking about in this exec ed session. Would you be interested in having a coffee with him?” And I said, sure. So she set up a conversation and he told me, “we're looking to roll this out, this enterprise social-networking tool. I had some success doing this at a prior company that I worked for. Leadership is bought in, more or less, but we're really concerned that it's not going to be successful, because [with] so many of these things … people try to implement them, and they don't use them. What strategies do you have for helping us to make it more successful?”

And I said, well, I think the first thing you need is to figure out a really good set of outcome measures that are important for the organization and that this tool will help employees to achieve. [H]ave you done that? And they said no, [we're just] going to implement it. And I said, okay, well that's point number one. And point number two is that I always find that … employees, especially in new technology implementations – which I've studied a lot in my career – are skeptical when you say, “we've heard of some other company that has generated these kinds of positives.”

So I said, I think another way to handle this is really to do a pilot study before you launch. Think of it like beta testing – not beta testing the technology, but beta testing the outcome variables … and your ability to achieve the outcome variables. [I]f you can demonstrate that the tool is effective at achieving some of these variables that we care about, improving these behaviors, then you've got a really good rhetorical frame to use to help disseminate it across the rest of the organization.

And so he said those are both good ideas. And I said, well, I think I can help you with that, if you're interested. [So] why don't we put together sort of a natural experiment, right? [L]et's talk and figure out what those outcomes are that would be of interest. Let's implement the tool in one group and find another group that's a matched sample and not implement it there, so we can show improvement. And if that exists in one of the groups, then you've got a really good internal case study to use to try to disseminate the tool more broadly across the company. So he was like, yeah, that sounds great. And that's how the project was born.

TD: I preface this next question by saying that I think more people are coming up through graduate school now who are trying to figure out just how to think about … technology and, sort of … take it out of the black box it’s often in, in organizational research. And so [that] prompts the question, “how do you define the term ‘affordances,’ and how do you think about analyzing a technology's affordances for purposes of management research?”

PL: Those are big questions.

TD: Yes, those are big questions, I know (laughter).

PL: I'll try to answer simply. So, I think affordances really arise at … the intersection between the set of agencies that exist in social practice. And what that means is that there are people who are agents, who are trying to do things, and they've got goals that they want to accomplish. There are technologies that provide us capabilities to do certain things and increasingly operate with their own autonomy – yes, I won't say volition, but autonomy, right? They do things; they do computations. The algorithms are working in ways that are producing things that we don't fully understand or anticipate, and there's a mix going on, right? [T]he agency of people and the agency of machines [are] really intertwining. Yes, and it's out of that interconnectedness of those things that an affordance arises.

So an affordance is not a feature of a tool, right? Affordances [are] what a tool helps people to do, [helps] them accomplish. And that result is the mix of what people are trying to do and what the tools are able to do … what [people’s] frame of reference is, what features of the tool they might try to use.

All of those things, kind of, mix together in this messy way to produce an affordance. So if you buy that, you can't just look at a list of features of a technology and predict what kind of affordances will arise. Instead, you know, it's this really emergent process by which you have to examine, you know – what are people trying to accomplish and how is that mixing with what the tools can and cannot do, which do have some objective limits to them, right? And that's where I think the materiality of the tools becomes important, because they enable and constrain action in many, many ways, but not infinite numbers of ways. And so you have to study this in this more … emergent fashion, because affordances are an enactment; they happen in the context of use. Now they do become routinized, I think, over time, in the sense that to the extent that our goals remain constant and we do an active linking of goals or objectives to features of the technology, it becomes routine to say, “Oh, you're trying to do this, use this tool.”

TD: Right. And given that viewpoint – something I obviously very much agree with – that understanding what a technology affords is part of an emergent process of seeing how it interacts with human action, human agency: did you always conceive [that] this study was going to have to be mixed-methods, basically have both the qualitative and the quantitative component, in order to actually be able to say the things that you wanted to say?

PL: Yes, I did. And I think that's because of the twin goals of the study. I wanted to be able to demonstrate – well, not demonstrate, that's the wrong word – I wanted to be able to uncover whether my hypothesis was correct, which is that if the tool was affording people the ability to learn about what [others] knew and who they knew, and if that tool was affording that based on the same set of interactions for everybody in the social group, then they should be developing more shared cognition. I knew that to answer that research question – or, [really,] to test that hypothesis – I needed to be able to identify, to create, a dependent measure of shared cognition, which is multifaceted in the paper. And I needed to have some way of demonstrating the effects of social network site use on that dependent variable. And while I couldn't observe that directly, I could observe it indirectly in an experimental way by setting up a condition where you had a [baseline] group and then only one group got the tool, so that we could infer that it was the tool use over this period that led to these changes. So I knew that I needed to use … that was sort of a deductive approach. I wanted to prove – well, to provide evidence for, right? – a correlation [between tool use and] this dependent measure of shared cognition, and change in it over time.

But then, in order to know how that was happening – I had this idea that visibility in some way did that, but we don't know what the mechanisms are that make things visible. So, how has the technology created an affordance of visibility? Understanding that, I felt, really required a more inductive approach. So I knew it had to be multi-methodological to answer the questions I wanted to answer.

TD: Was there any sort of difficulty with the review process and the framing of the paper?

PL: I've published, I don't know, maybe five or six papers … over my career so far that are mixed-methods in a single paper. And every research project that I've done has always had a mixed-methods component. And then the challenge in crafting a paper is to figure out: what story do I want to tell for this particular paper, and is the story I want to tell one that requires me to use a diverse set of methodologies in the same paper, or do I use … one set of tools and provide an analysis in one paper and another set of tools and analysis in another paper? What I've found throughout my career is that it's often easier to split papers by method of analysis.

And I think it's in large part because the way that you write up a front end for a paper that … tests theory deductively is different, right? The genre is different, because you're bringing in lots of prior literature that culminates in specific hypotheses that you will then go and test. Whereas if you're doing an inductive paper, you're building up a logic or a framework that would, sort of, give you a problem, right? That creates a problematization that you're then trying to go out and explore through some qualitative, inductive data. And it's really difficult. And those entail different ontological, and often epistemological, commitments … not to mention, like I said, [they’re] different genres. And often there are … different research communities that expect different things. And so a good editor will assemble people that can speak to both pieces of that. Often [reviewers] want conflicting things in the same paper, or they don't buy into the idea that you could do some quantification in a primarily ethnographic study. And so you run into this issue of contrasting worldviews, norms around methodologies. And so it's often, I think, much easier to carve out papers by methodological differences. But for this particular story that I wanted to tell – and this has happened, as I said, five or six times, probably, throughout my career – I really have felt like, to make this argument, you need both methods.

I do think that Mary (Tripsas), the senior editor, was helpful to me in pointing towards the kinds of things that I really needed to attend to from each of the reviewers in order to satisfy … the majority of their concerns and, in her view – she's trying to look at the paper, now, as a whole, where they're often looking at it in terms of parts, right? – what's going to make for the most compelling paper. And so I think she did a really nice job, and it was quite helpful in that process, because if you tried to attend to everything that the reviewers asked for from those very different methodological perspectives, it wouldn't fit in the paper.

TD: Do you see yourself doing more research around shared cognition and technological means, or is this just a one-off, really good opportunity that you were able to take advantage of?

PL: So, the shared cognition piece was never my primary interest in this paper. My primary interest has been this idea of visibility and what creates visibility. And so social tools are not my primary interest in this paper either, right? It's really social tools – in this case the platform, you know, enterprise social media – that enabled visibility in this context. And one of the important things that happened with visibility was shared cognition. And so, my larger area of interest is around this idea of visibility: that we engage in different kinds of activities that are more or less visible to others, and as they become increasingly visible [and other aspects of things that we do] become more invisible, how does that affect organizing, and how does organizing affect those levels of visibility that we have?

My research really spans management, organization studies, information systems, and communication research. And so I've published papers [in] journals, [including] communication journals, about this concept of visibility … kind of looking at visibility from different perspectives. And there are two places I see myself going with this research. One is more contextual, which is about algorithms, because, you know, here we've got tools that are more or less directly making things visible. Algorithms indirectly make things visible, right? They select. So rather than you deciding what you want to pay attention to, the algorithm tells you what you should be paying attention to. And so algorithms, and the agency, kind of, embedded in algorithms, are, I think, really important in shaping visibility. And so that's an area that I want to explore more.

The second area, which is more phenomenon-based, … is about attention. Because, as you know, [as] the volume of information increases, there are more things to which we could attend. But there's got to be some kind of threshold at which we can no longer pay attention, right? And that's where astute tech companies are saying, well, yeah, we get that; that's why we're building these tools that will help direct your attention, like algorithms. But those have their own set of politics. And if we know that what becomes visible ends up mattering for lots of different activities, like shared cognition, now we're no longer responsible for what we see. But we've offshored that, or outsourced that, to a tech company – to tell us what we should see. Imagine all the implications that has for organizations and organizational action.

TD: This has been so, so cool! Thank you for taking the time. This has been helpful to me and, I am sure, to everyone at OMT.

PL: Well, thank you for doing this! It’s been great to meet you, Teddy.