Play the webinar
January 19, 2022
The State of Open Data is a series of surveys and reports analysing the current trends in open research data and is the longest running longitudinal survey and analysis on open data. They are a result of a collaboration between Figshare, Digital Science, Springer Nature and other leading industry and academic representatives. In this webinar, Figshare's CEO and Founder Mark Hahnel will present the key findings from the survey and highlight some of the contributing pieces in the 2021 report.
Please note that the transcript was generated with software and may not be entirely correct.
Thank you for joining us for the webinar on the sixth State of Open Data report.
Yeah, very happy to have you today. I think this is the most attendees we've ever had for a state of open data webinar, which is really exciting.
Really glad you're here joining us. My name is Megan Hardeman. I'm the product marketing manager at Figshare, and I'm going to be doing a little bit of administration before I hand over to Mark and Greg. The first thing to say is that this webinar is being recorded, and we'll send out the recording to all registrants shortly after the webinar ends.
If you have any questions, we'll have some time at the end for Q&A. There should be a question box function in the webinar software, so please feel free to type your questions there at any point throughout the webinar, and we'll answer them at the end. I'll ask them anonymously, whether you include your name or not. So, yeah, I'd like to introduce my panel: Mark, the founder and CEO of Figshare, and Gregory, a research analyst at Springer Nature. They're going to talk today about the State of Open Data 2021, so I'll hand over to you, Mark.
Thank you Megan! Hello everybody. Hopefully you can hear me loud and clear. I'm sure Megan will tell me if not and hopefully you can see my screen so it has been a big year for data in general.
For those who don't know, the state of open data has been running for six years.
Figshare itself, where Megan and I are from, has been going for 10 years. It was our 10th birthday on Monday.
And so it took us a few years to understand how people were using Figshare as a platform, how people started to make data available on different types of platforms, and what roles were needed to move this space forward.
And back then, 10 years ago, it was all about, um, trying to encourage incentives and make it a low barrier to entry.
I think, as we get into this today, we'll see that the incentives are still a big thing.
And the low barrier to entry is not so much a requirement anymore, as people get more familiar with it.
But, you know, over 20,000 respondents from over 192 countries show that this is a global discussion going on in academia and research.
And this has enabled us to see trends over time and take a sustained look at what's happening in this space. So with that, I'll hand over to Greg. Greg Goodey is a Research Analyst at Springer Nature, and he can talk us through some of the setup and thinking behind the survey.
Thanks, Mark. Hopefully everyone can hear me as well.
And I thought, just to start, I'll give a little bit of information around why Springer Nature is involved.
So, I mean, we have been participating with the survey, in particular, for all six years. And I think, you know, to make any movement successful, it's understood that the whole community, from funders, institutions and governments through to researchers themselves, needs to take concrete steps to achieve growth. As a business, we are firmly committing ourselves to supporting the open science movement. We're not perfect, and there's still a long way for us to go to be fully in support of that, and one of the steps we are taking is working with our partners at Digital Science to help build better solutions. But also, and most importantly for the State of Open Data survey, trying to understand perceptions and needs around the open data movement.
If you move on to the next slide, thanks, Mark.
So, how we're involved: myself in particular, I've been running the survey for the last four years, but as a team, we've been doing it for all six.
And that means we design it and host it, and, as Mark said, we try to track some of the information over time.
So there are questions that we tend to keep the same year on year, and then there are those that we have changed, as well, to try and investigate new topics as the move to open data matures.
And this year, the survey was translated into three languages other than English: Chinese, Japanese, and German. That's mainly because we tend to see underwhelming response rates from those regions unless the survey is translated, and for no other real reason.
Then the survey ran for two months over the summer of 2021.
So between May and August. We generally distribute the survey through a number of channels, and we've tried to keep that consistent for at least the last two to three years, but predominantly we see most of our responses come through marketing email lists.
But we do supplement this with social media; in particular, in China we saw good promotion through WeChat and blog posts. And this year, we've seen about 4.5 thousand usable responses. We do get larger samples than that, but these are the ones which are clean and which we feel are reliable to analyze.
I thought just to provide a little bit of context for the results that Mark will discuss in more detail.
I'd give a bit of an overview of who responded to the survey this year. I think you can see from the left-hand figure in the presentation that the majority of responses do come from the northern hemisphere. The largest response share was from the US, at 15%.
We got a good response rate from Europe and Asia, and although significantly smaller, I actually think we have sufficient responses from countries in the southern hemisphere to be able to do some nice comparative analysis.
Additionally, we also collected information on field of interest, and we saw a relatively good spread of interests there, though as a business we're slightly biased towards the biomedical and biological sciences, so there's a slightly higher response there.
But yeah, we've got good responses from Applied Sciences, Humanities and Social Sciences, Physical Sciences, et cetera. And then one measure that we try to consider as well is the career stage of the respondents. This is not by any means an accurate measure.
But we do try to infer career stage from publishing history. So, a late-career researcher is someone who published their first peer-reviewed article over 10 years ago, an early-career researcher within the last five years, and mid-career in between.
You can see that the panel is somewhat biased towards more established career researchers.
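As a rough illustration of the career-stage proxy described above (the function name and inputs are hypothetical, not from the survey's actual methodology), the rule could be sketched as:

```python
from datetime import date
from typing import Optional

def infer_career_stage(first_pub_year: int, current_year: Optional[int] = None) -> str:
    """Classify career stage from years since the first peer-reviewed article."""
    if current_year is None:
        current_year = date.today().year
    years = current_year - first_pub_year
    if years > 10:
        return "late-career"   # first article over 10 years ago
    if years <= 5:
        return "early-career"  # first article within the last five years
    return "mid-career"        # between five and ten years
```

For example, a respondent in 2021 whose first article appeared in 2014 would be classed as mid-career under this rule.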
Then this year, we actually tried to do something somewhat different that we'd never done before, and that was to try and understand how receptive our panel was to open science in general. Just to try and understand, you know, how representative our sample is, or at least what our sample looks like. We did this by asking three questions on a five-point scale, from strongly agree to strongly disagree.
The three statements listed on the slide try to gauge their openness to open access articles, their openness to sharing data openly, and their openness to sharing all research outputs openly. And we saw that, actually, the majority of people were supportive of open access articles, and least supportive of sharing everything openly.
But generally, we then conducted a latent class analysis, and we found that we could probably class 72% of the respondent pool as open science advocates. Now, I don't know how representative this is; it's hard to say without just doing larger and larger surveys. But that's how our panel fell. And I think, yeah, that's about all I've got, so I'll hand back to you, Mark.
Thank you very much, Greg.
I think you said a great line there: you know, we're not perfect, but we're working away and moving in the right direction. And I think that's true of the open data space in general, right, and of the survey.
And everything that everybody is trying to do, you know, we are trying to move things forward.
We're very grateful to be partnered with Springer Nature, you know, playing to everybody's strengths. Springer Nature has such huge appeal and brand recognition in the space, and it's a great way to get more of a conversation going around this.
And "not perfect, but trying our best" is also how I'd describe my whirlwind approach to explaining a 40-page document and full survey results in the next 20 to 30 minutes. So do dig a little deeper into the actual State of Open Data report itself. If you Google "State of Open Data 2021", you will find all of this information.
Um, and I want to start with the fact that some people qualify themselves as anti open science. It seems strange to me that we live in a world today where there are people who are against open science, and I think that's something we have to be aware of as well, as we play to our strengths.
But we also have to be aware of the game: there are perverse incentives in academia, and there are different incentives in different parts of the world. We've always looked at this survey as a way to try and better understand them, and then provide our own incentives to help assuage any concerns, or to help motivate people to share data, because we think it's crucially important in moving academia and research forward.
So, if I start here with the three big takeaways, I think we have to start by not ignoring the elephant in the room. That is, Covid has had a profound effect on how people do their research, share their data and their papers, and how the general public interprets said findings, research and data. And so concern about misuse of data is the number one concern; it's higher than ever before.
You can see on this slide that 43% is the highest we get to, but there are a few other concerns that are also high up there. We can also look at this in a longitudinal way.
If you look over the last four years at what I was just highlighting there: the urgent need to better understand and treat the virus in 2020 brought unprecedented, collective, and collaborative action, which is fantastic. You'd just hope that you wouldn't need a situation like this to kick things along.
And so, if we look at the concerns that are going up over this four-year period, you have things like costs, which, I think, is gradually becoming more of a concern as data gets bigger and as people try to understand how to publish their data.
There's obviously this idea that if you need to keep 20 terabytes for the rest of all time, then there's going to be a cost associated with that. And where does that come from?
Also going up is incentives, you know, people still wanting more incentives. "Not receiving appropriate credit or acknowledgment" has gone up year on year. So that's a really interesting one; that was something I was talking about 10 years ago, saying we need to provide better incentives. So I think the question now needs to be: where are those incentives going to come from, in terms of appropriate credit or acknowledgement? You can get more citations; we can keep harping on about that.
We can keep exposing that to people, but how do we really make change at the credit and acknowledgment level? In terms of what's going down: being confused about where to go, looking for a repository, is going down. "I don't know what repository to use" has gone 23%, 20%, 17%, 16%. I think that's really great to see, as the tools are there, right?
We have this idea that the future is here, it's just unevenly distributed. We have technology, we have experts, we have researchers, but there are still some things that are causing concerns.
Oh, one last thing about this: the unsolvable problem, licensing. "I have concerns about licensing my research", and the same for copyright and data licensing. Nobody wants to learn about copyright and licensing, I think, is the message, but we've got to keep building it into the way in which we educate researchers about how to share their data.
Um, so a third of respondents indicated they reused their own or someone else's openly accessible data more during the pandemic than before, and I think this is because we were forced to do things differently, as I mentioned.
We're forced to work differently.
And so, if you think about this in terms of how academia works, the idea that you are only going to reuse other people's work, or reuse your own work, happens when you've got no access to the lab. Academia tends to just keep the pace up, to keep pushing forward, but we need to look back and think about the reproducibility of research. We really need to be thinking about the quality of the research.
And, as I say, Covid has highlighted this. We heard from Natasha … from the ARDC in Australia. She is the Associate Director for Data and Services, and she said that efforts to address hurdles to data sharing in the areas of policy and cultural change will fall short if we do not have underpinning research infrastructure, and the experts needed to run the infrastructure.
And I think that's, as we're moving to this idea around fair data, findable, accessible, interoperable and re-usable data, then there needs to be an even playing field.
It's got to be about everybody working together on this.
So, another takeaway is that there's more familiarity and compliance with the FAIR data principles than ever before. This is because it's a huge movement globally, right? Everyone sees this as a good acronym to send people towards, to push the space forward.
Um, I think everybody understands that findable and accessible are easier, and that interoperability and reusability are harder, because not everybody has the appropriate infrastructure or expertise to guide them around describing data well. Interoperability is the technology part, focusing on interoperability between systems and metadata schemas, and reusability is really making sure the data is described well and is discoverable. And I think that is what Natasha is highlighting here: this needs to be happening globally.
The third key takeaway is, repositories, publishers, and institutional libraries all have a key role to play in helping making data openly available.
When we look at where researchers are looking for advice and who they're relying upon, it's everyone, right? 35% are relying on repositories, 34% on publishers, and 30% on their institutional libraries.
So, this shouldn't be a tournament of trying to win market share.
We should be looking for consistency. We should be looking to share knowledge. We should be looking for common standards.
And having this kind of data gateway idea, of everybody sending people to the appropriate place. As I said, this is not a perfect space. It's moving in the right direction, but I think we need to keep competition out of it for now.
If we look at the takeaways for institutions: there are some fantastic universities and research organizations out there with big budgets doing fantastic things, but 58% of respondents would like greater guidance from their institution on how to comply with their data sharing policies.
I think the point, though, is that we would like greater guidance at the point at which we publish our data, the point at which it becomes a problem. So courses are great, but we can't work in a world of just-in-time responses to "I need to publish my data because my publication tells me so", right? There needs to be more than that.
Researchers have a role to play here, as well.
So the survey itself, these are the three core things that we found. But obviously, you can dig a lot deeper into the different parts of the survey. There's also some really great, really interesting thought pieces from around the world, from different industries, thinking about how they're coping with this. And so if you're interested in the open data space, there may be certain bits that are relevant for your job in particular, or for the problems that you're trying to solve.
Um, so I'm just going to highlight some of the key takeaways from some of them.
There's a section from 4TU, which, as I said, is a huge research organization, a leader in Europe in terms of data curation, enhancing metadata quality, and research data services in general.
So, if you're thinking "how should I start? what should I do when it comes to offering services to researchers?", then Jan van der Heul, as a data curator in the space at 4TU, has some really key tips on how to do that and how to encourage people to make good metadata available.
At the University of Oxford, we have this idea of consolidating research data in one place. They're making use of Figshare infrastructure in the humanities to try and normalize research practices, rather than having an individual use case for every group and every department.
And so I'm just going to let Damon talk for a second.
Hi, I'm Damon Strange, project manager at the University of Oxford. I'm here to introduce a piece I pulled together for the State of Open Data. We've recently launched a service which looks to support researchers in making their research data as sustainable as possible, by offering a single basket, or area, where research can be managed in one place, rather than the historical arrangements, where data was siloed across many departments with varying degrees of support: some localized faculty and departmental IT support, or external contractors that help maintain databases and datasets.
In terms of sustainability, accessibility, longevity and openness, the credentials of the data are discussed in the piece.
So hopefully you'll find it an interesting read, … though with around 5% of the content being eggs and baskets. Thanks.
And what's great about that piece from Damon is this idea that no researchers are unique, yet they often feel that their research is unique. And nowhere more so than at the University of Oxford, where you find researchers who are very advanced in their fields and want a perfect solution, rather than a solution that just works for everybody.
But this is a journey. It's a step-by-step process that Damon's talking about, where they have to start with using vocabularies in the humanities. And it doesn't matter who you are: if you want your research to be impactful long-term, you have to try, and you have to encourage those researchers to do so.
In terms of tips for engaging with your researchers, the University of Pretoria has a long history in this. So, we've been doing this as a longitudinal study for six years.
The University of Pretoria ran their own survey back in October 2009, and as such came up with a Research Data Management Readiness Toolkit. So if you are looking to do something similar yourself, there's a lot in there on Creative Commons, and on persuading researchers that "non-commercial" does not mean someone is going to take your data, build a bazillion-dollar project or a drug from it, and leave you without credit. It's all about understanding, and in a lot more detail.
So, three of our colleagues there talk through some of these tips, and they also talk through the incentives, right?
So, because they've been thinking about it for the last 13 years now, they have university awards to incentivize researchers, and there are also the NSTF Awards in South Africa, which cover research data management and research data sharing. So there are awards happening at the funder level, but what's really important is that folks within the university nominate their researchers for those awards, because somebody has to win. So why not their folks?
We cross the whole gamut in terms of getting different viewpoints. You know, both Oxford and the University of Pretoria, who I've just mentioned, are using Figshare infrastructure, but we want to make sure that we're getting a balanced viewpoint.
So we're very lucky to have Sarah Gonzalez, who's a data librarian at Northwestern, talking about how she's working on her project. We've got a little video from her that I'll play as well.
So, here's Sarah talking us through some of their work.
Hello, I'm Sarah Gonzalez, author of the contributing piece "Open Source and Open Data: Collaboration is Key".
My institution is the Galter Health Sciences Library at Northwestern University's Feinberg School of Medicine.
And others like it around the world are committed to enabling data sharing, open data, and scientific reproducibility.
We've joined the open-source repository development project hosted by CERN to upgrade our institutional repository to a next-generation tool, and to bolster and reinforce our commitment to open scientific data sharing.
In our contribution to this report, we emphasize the contributions of the talented, dedicated partners of the open-source community, both in terms of code and of expertise on issues related to the data model and the day-to-day work of depositing, cataloging, and storing data.
We also emphasize standards and best practices from the fields of data and file management, which enable and guide all of our work. This includes things like leveraging DOIs, the DataCite standard data model, and controlled vocabularies.
We emphasize the ability to leverage … as a data catalog capable of holding descriptive records about datasets without holding the datasets themselves, which may be needed for adequately sharing biomedical datasets that could potentially contain patient-identifiable information.
Through our collaborative and supportive relationships established through the open-source community, we enable all this work to happen, and we plan to continue to work together to develop new modules and feature improvements.
Thank you, Sarah. That's a fascinating read in that section as well, because it talks through opportunities for collaboration; they're working with folks from Nigeria, from Turkey, from Japan. It really is a global thing.
But obviously, being based in North America, it also touches on the National Institutes of Health's new policy, which comes into effect in less than 12 months now, in January 2023, and which says: if we fund you, you have to make your data available when you publish your paper. And I think what's really important here is: what are the strengths in your industry?
We know, as I mentioned already, that the prestige of publishers can help. If you're trying to get published in our journal, we can make you jump through a few hoops. If you're trying to get funded by us, we can make you jump through a few hoops.
I think the flip side to that is the researchers and institutions: how do we talk to researchers to let them know that there is help within their organization, free help, to make them have more impact with their research? And I think it's all about tailoring the message that way.
It's really interesting, too. Obviously, we heard from Greg about the different areas, the different categories, the different domains that Springer Nature appeals to, and obviously it's life-sciences-led. But at the same time, if you normalize that data, it's really interesting to see the changes.
And I think, again, Daniel Kipnis touches on this: the impact of Covid.
If you look at Re3data, which is the catalog of all the repositories where you can go and publish your data, there are 2,700 of them, of which 1,500 are in the life sciences, so the majority are in the life sciences. And there are already 68 for Covid.
Um, and this idea that researchers are all different in different domains is definitely true, and I think it really touches on that idea: what you are in research for can have an impact on how you pitch to certain researchers.
So, there is a higher percentage in the life sciences who believe that data sharing should be a requirement than in the rest of the survey pool.
And you can, you know, you can hypothesize on why that is.
But I think a lot of people understand that. You know, I tweeted out today: if we had "data available upon request" for the everyday Covid results, I don't think the general public would stand for that, right? Or "upon reasonable request". So why do we support that when we're making healthcare decisions?
It's a lot more urgent when you think about it that way, and I don't think that "upon reasonable request" line should be allowed in any publication.
Speaking of publishers, if we look at the key takeaways for publishers: 47% said they would be motivated to share data if there were a journal or publisher requirement to do so.
I think we can frame that question in a way that asks, you know: if you wanted to publish here, would you choose not to publish if you had to make your data available?
So, 53% of those surveyed obtained research data collected by other research groups from within a published research article. I think that's really the way to build on top of the research that's gone before.
You know, methods are useful, conclusions are useful, but the data is useful too. It's also very efficient in terms of not having to redo experiments that have been done before, over and over again, just to get to the same starting point.
And 53% of respondents said it was extremely important that data is made available in a publicly available repository. So we're getting strong vibes across the whole of the publisher space.
We're lucky enough to have Graham from Springer Nature, who works on research data management. Obviously, Springer Nature have been working on and leading the way in the data curation space for a long time from the publisher perspective, and PLOS have been pushing on this very hard too, you know; they've had a mandate for a long time now, and I think they're ramping up their efforts again.
So it's really great to see the real leaders in the publication space taking this on. If you think about publishing research, there's this idea of curating data as a service. In my own personal opinion, a lot of people think of it as akin to peer review, but it's not peer review: you're not checking for the novelty of the data, you're checking whether the data is there, and whether it's well described for the field, right? So you need some subject-specific expertise, but you don't necessarily need to be doing peer review in the same anonymized way. It's separate from the core service of publishing the paper. So, here's Graham.
I'm Graham Smith, a research data manager at Springer Nature, and my piece for the State of Open Data report looks at the role publishers can play in supporting data quality.
A lot of the context in this year's report is about the spotlight shone on data by the Covid-19 pandemic, something that we've seen in mainstream media calls of, say, "OK, show me the data underlying this particular claim." But also a spotlight shone on data quality.
OK, so how trustworthy are all these data that we're seeing? Publishers play a key role in terms of trustworthiness, in terms of the reliability and quality of published research, and increasingly this includes data as one of its outputs, alongside things like research articles.
So a lot of the work that's been done in this area by publishers has been the development of policies and the rollout of particular tools and services to support data.
What this piece really looks at is what roles specialists do or can play to support the publication of data that underlies articles, and specifically what the future of that type of support might look like, particularly thinking about what sorts of roles might be built into editorial operations.
Thank you, Graham. So if you're a publisher, and you're looking to do stuff in this space, I highly recommend talking to Graham.
He's been working in the space around data curation for a long time, and I think this idea of publishing the paper but also publishing the data at the same time is a critical point: to reach researchers, to understand where the public interest is, and to understand that the public wants to see whether you can back up these claims, so we don't have problems with fake news and false claims and other nonsense.
So, one interesting area is where all these different industries meet. Just to round out the different sectors, here are the key takeaways for funders and government agencies.
You can read faster than I can talk, but 73% of survey respondents strongly or somewhat support the idea of a national mandate for making research data openly available. I think we know this is what drives a lot of things, so the mandates have to have teeth.
What is the threat? What are you going to do if they don't comply? And we've seen this; we've seen the effect of, you know, Green Open Access paper repositories in the UK as a result of the REF. I think we'll see more of this coming from the American funding agencies. We've seen it in South Africa, with the one big funding agency saying you have to make your data available.
If you're representing one of these organizations in a country around the world, please do take a read and have a think about what you should be doing in order to make this a reality.
And we heard from Keisuke Edar, from the Japan Science and Technology Agency, …, also known as JST.
So at JST, they have J-STAGE Data, which is an evidence-based data platform for Japan's learned society publishing. This is a great combination of the two: they also work in the publication space, so if you're a learned society, you can have your own publication process and they provide the infrastructure for it. They've also had an Open Science Expert Panel for the Cabinet since 2014, and they found that the next thing they needed to do was add data functionality for these journals, so that they could support it in this way.
And 42% of respondents to the State of Open Data survey in Asia believe that funders should withhold funding from, or penalize, researchers for not sharing their data. So that's encouraging to see. It's encouraging to see that every angle, every domain, every stakeholder is doing something globally, so it's all moving in the right direction.
And I think this idea of how we can ensure that the research that is done and published is of the highest quality is really important.
And this idea of having infrastructure where researchers can submit their data and experts will check it before it's published: that can be librarians looking for better metadata, or subject-specific metadata experts in subject-specific repositories. That is how you can ensure that the highest quality data is published, because that's what we need when we're getting such strong eyes on things. I think the public perception is that they only see data when it's all gone wrong.
Um, and we see this time and time again. You know, we've seen it with ivermectin; this is a great example of how having open data helped with a real-world problem.
So when they started looking into the data, because it had to be made openly available, they found two things.
They found it was either falsified or duplicated, or it was just an accidental thing, right? So, percentages calculated incorrectly. This is the idea that many eyes make all bugs shallow.
If everybody looks at it, you can find the problems faster. But also, you know, the selection of patients for test groups was not random, and the numbers were unlikely to occur naturally, which is a really nice thing about open data.
This idea that humans subconsciously work in patterns means you can analyze data, do data forensics on it, and ask: has someone just fudged the numbers here? Have they made it up? We have all these tools.
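As a purely illustrative sketch, and not a tool mentioned in the webinar, the kind of data forensics described here can be as simple as checking whether the terminal digits of reported measurements look uniform; the function name and the cutoff in the comment are assumptions for the example:

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-squared statistic comparing the last digits of integer
    measurements against a uniform distribution.

    Honest measurement noise tends to leave terminal digits roughly
    uniform; people inventing numbers often favor certain digits.
    A large statistic (e.g. above ~16.9, the 5% cutoff for the
    chi-squared distribution with 9 degrees of freedom) is a flag
    for closer review, not proof of fabrication.
    """
    counts = Counter(abs(v) % 10 for v in values)
    expected = len(values) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Perfectly uniform last digits score 0; a suspicious column where
# every value ends in 5 scores very high.
print(terminal_digit_chi2(list(range(100))))   # 0.0
print(terminal_digit_chi2([5] * 100))          # 900.0
```

Real forensic work (as in the trial-data cases discussed above) uses a battery of such tests plus checks on randomization, but the principle is the same: open data makes these checks possible at all.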
We just want it to be sorted before it gets to the stage where it makes the mainstream news, and I'm sure we all heard about that happening this year. And so we were very lucky to have Professor Ginny Barbour over at QUT talk about how open data can validate research and combat scientific misinformation.
And she really echoes this point about how we can ensure that the research being done and published is of the highest quality, papers and data alike. That's really the take-home message, I think, for everybody; every organization should be asking themselves that.
Fundamentally at our organization, how can we ensure that research is being published at the highest quality?
And so open data has two important, overlapping roles to play in increasing the credibility of research.
It's validating it so that researchers can trust it.
And it's combating scientific misinformation so that wider society can trust it.
I think in terms of credibility, that is definitely one of the ways forward.
But I also like to think about the fact that we have a huge credibility issue drawing attention to this space, and that's something we can do more with.
But bringing it back to, you know, 10 years of being in this space with Figshare: what inspired a lot of the original starting point of Figshare was this idea of the fourth paradigm of research, data-intensive scientific discovery.
So it's not just the computer helping us with research; it's the computer finding patterns. Implicit in Jim Gray's fourth paradigm is the ability and the need to share data, and we've started to see that come to fruition. In 2021, 12 years after that was originally brought out, we see huge real-world changes in this space: AlphaFold from DeepMind, looking at openly available data from the PDB and building on top of it using AI, using machine learning.
We get a wholesale change in the way protein structure is thought about.
The research that's going on takes a monumental leap forward, and this is the thing I think we need to be thinking about at the top level. So when you're talking to people, you need to be thinking about what story you need to be telling them. Is it for the good of humanity? Is it for the good of credibility? Or is it for the good of their own research?
The good of humanity gets a lot easier when we're saying, hey, you'll all be allowed out of your houses a lot quicker if we have the data openly available. I think Covid has been a fantastic way to illustrate this. I wish it hadn't come to this, but it really highlights how people can build on top of the research that's gone before.
To find treatments, and the amazing speed at which we rolled out the vaccines around the world.
And to finish off, it's this idea that the speed at which any given scientific discipline, any research discipline, advances will depend on how well those researchers collaborate with one another.
And if we really want to make academia more efficient, in terms of saving money, in terms of moving further, faster, we need to fairly give every researcher access to the literature and the ability to publish literature, and to give them access to data and the ability to publish data.
And so this involves people, tech, and culture: in funders, in government, in academic publishers, in institutions.
And in the middle of this is funding for repositories, in libraries and archives, and for the librarians and the experts themselves. That includes subject-specific experts at publishers; that includes scholarly engagement folks at universities.
So it's really a hugely encouraging survey this year.
This just highlights how many different mainstream organizations are pushing the space forward.
We are aware of what the problems are, and we have tips and tricks to try to fix those, or to encourage people to act in the proper way. I think the thing that we don't have yet is evenly distributed funding.
And I think we need to be having that conversation with the funders. We don't have evenly distributed mandates either, and we need to be having that conversation with the funders too. But the fact that folks like the National Institutes of Health are moving forward with this, and have working groups on it, as does the NSF, and then other funders like the Bill & Melinda Gates Foundation, the Wellcome Trust, the European Open Science Cloud, UNESCO.
It's all moving in the right direction. So if you are working in the space, do have a read of it. If you're just interested in one part, I can share these slides, and you can check them out. But that's everything from me.
Thank you once more to Greg and everybody on the team. Megan, I'll hand back to you for any questions or comments.
Great. Thanks, Mark and Greg.
And, yes, please feel free to type any questions in the question box.
And there's one comment so far rather than a question, but it's still really interesting. They said: "Interesting that output sharing is not in favor, since this is the approach to make grants more competitive."
I don't know if you have any thoughts about that, Mark in particular, on output sharing?
Sorry, I can't see it. Could you repeat it?
Yeah: "Interesting that output sharing is not in favor, since this is the approach to make grants more competitive."
Right, yeah. I think grants are another area; you're talking about incentives. What we have is general movement, I think. What we see is a general "we think you should be doing this, and we may look favorably on you if you do." Until you have a quantified "you will get an extra 10 points if you can demonstrate how you've made your data available," it's not really the incentive kicker that it needs to be. And it's different globally.
We see it, you know, in Australia, where they have that; in the UK, the REF has talked about it; in North America, they don't have anything. Well, they do in the private sector, but not in the public sector.
So I think more of that definitely needs to be happening. I don't want to point fingers at the funders, because they do an amazing job and they're moving the space forward, and they have to deal with a lot of different areas.
So they are moving at a pace that is aggressive for funders.
But that doesn't mean we can't ask them to speed up anyway.
I think maybe just the other thing to add on that, in terms of the data aspect: although the level of support for all outputs being shared was less than for publications alone, it's still over three quarters in support of that in the survey. So it's still a high proportion of people who are in support of those outputs being shared; it's just not at the scale of publications.
Thank you both.
There's a question: do you think respondents are more interested in penalties for others for not sharing than in relation to their own practice?
Do as I say, not as I do: it's a classic. It also works the other way, though, because I think a lot of people feel that they can't share their data. Why do I have to share my data if all of my peers don't? So I think that's why people want it to happen at a national level. Because if I have to share my data but my peers don't, they have an advantage against me in a system that encourages competition.
Right: if you want to get the next grant, if you want to get that professorship, you have to be better than, or perceived to be better than, everybody else. So why would you want to give your peers, your competition, an advantage? That's why it has to come from the top.
But, yes, 100%.
Oh, there's a question for you, Mark: "Great presentation and report. Does this leave you hopeful for the near future of open data? Also sort of regarding the global crises beyond Covid?"
I think so. Ginny Barbour makes a great point about this in her comments as well: you know, we didn't need Covid to come along to highlight the big problems that need solving.
You know, we already have climate change; that's a big enough one that we could all be working on. And we've seen the problems with climate change papers.
We've seen the problems with, you know, the "vaccines cause autism" paper: retracted, debunked, bad science, and the effect that has had, which has no doubt added to where we are with anti-vaxxers and Novak Djokovic, right?
So I think there's a lot of things to be optimistic about, like what I said about the idea that 10 years is not a long time. I've been thinking about 10 years because of Figshare, but 10 years is not a long time if you compare it to, you know, the 1980s, when we first got "let's put all the papers in the world on FTP servers."
And we're 40 years on from then, so it's only been 10 years of concentrated "let's move the data space forward."
There are people who've been working in that space for 50 years. I'm just saying that the everybody-pulling-together kind of thing, the policies, the data mandates, only came in in 2014, 2015.
So I'm really optimistic, because there's such a groundswell of "if we can treat the data well..." We see the other side now.
We have the use cases from folks like DeepMind: if you can get the data, curate it well, and use standards, then you can achieve great things and change the face of research. The question then becomes: how do you do that for everything?
And that's a lot harder with more heterogeneous data. How do you make homogeneous datasets out of geographically distributed, heterogeneous data?
I think that's where we're going, and I think smart people will enable us to solve it.
There's a question about whether the survey respondents may be tilted towards those already seeking and using open data, and whether attitudes of the larger community may not be favorable to open science.
I think this may have been touched on at the beginning of the webinar, but I don't know if you have anything else to add.
I don't know how you feel about it, Mark, but I mean, I think, as I said, you know, it's hard to really gauge what the broader population feels unless you just do bigger and larger surveys.
I know that when you look at things by the different geographies, there were some differences in the distribution of advocates, agnostics, and antis; that did have a slight variance in the distribution. So there is going to be a natural difference. So yeah, I mean, "I don't know" is the honest answer. I can only present the data the way we're given it, and the likelihood is there is a slight bias there.
Just because, you know, how many of us respond to surveys on things that we're not particularly interested in? It's one of those things with market research in general.
I still think, because of the scale of the positive response, it's likely that the majority still are in favor of that move to open science.
I also think there's a little bit there around, you know: if you ask people "do you think what society broadly perceives as a good thing is a good thing?", you often get the answer "yeah." But will they act that way in reality? Maybe. The good thing, though, is that we can see it.
If you look at the numbers of datasets being published, and you can go to Google Dataset Search and places like that, you see that the actual volume of data coming out is huge. The number of citations from papers to repositories like Figshare (talking about the Figshare data itself) is an exponential curve, right? It's like this.
So I think you can say that people might be paying lip service on some things, but you can also look at the core data and see that it is growing exponentially.
And maybe just one more thing to add to that: some people may not be classed as advocates because they're actually held back from doing those things. It's not so much that they're not in support of it; they just don't have the infrastructure in place to be saying, "yes, this is something that I can get behind." So actually, you know, some of those things that Mark was talking about could shift those numbers towards the positive as well. It might be that they're not necessarily advocates at the moment, but they could be if they were given the proper structure.
There's a question here about data citation practice: it continues to be poor or inconsistent, which makes it difficult to track the use and impact of our data. Just wondering if you have thoughts on what we can all be doing to boost data citation practice: publishers, repositories, and beyond?
It's a difficult one.
One of the things I can say is that one of the big funders is funding generalist repositories to get on the same page, basically.
So they are funding grants.
It hasn't been announced yet, but there is a group of repositories that will be encouraged to work together on standardization of things. So they are putting their money where their mouth is. These things will not happen in silos, and all the repositories had a chance to get involved. So I think there's positive stuff happening there.
I do think the open citations work that's being done is really good at unleashing information that was previously trapped.
The problem is that it needs to be operating on a level that is consistent for all, because otherwise, if somebody has an advantage, they can control it. The thing I'm talking about here is: if you have an open set of data that everybody can query for citation counts, that's fantastic. But if someone has a bigger set of data that gives you higher citation counts, then it's very hard to stop that group from using that data, because researchers will see the highest citation counts there.
So it is a hot topic, but I think it's completely resolvable. And I think things like the collaborations between repositories can help inform other areas as well; it's not just the repositories, it can inform publishers too.
And I think a lot of robots can come in and help us, you know, checking to see "is this a valid DOI on a Springer Nature paper?"
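A minimal version of such a check, sketched here as an assumption rather than anything a publisher actually runs, could validate DOI syntax before trying to resolve it; the function name and pattern are illustrative:

```python
import re

# Syntactic pattern for modern DOIs: the "10." directory indicator,
# a 4-9 digit registrant prefix, then a suffix. Based on the regex
# Crossref recommends for matching DOIs in text.
DOI_RE = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def looks_like_doi(text: str) -> bool:
    """Cheap syntax check only; a real link-checking bot would also
    resolve the DOI via https://doi.org/ to confirm it exists."""
    return bool(DOI_RE.match(text.strip()))

print(looks_like_doi("10.6084/m9.figshare.123456"))  # True
print(looks_like_doi("not-a-doi"))                   # False
```

A syntactically valid string can still point nowhere, which is why a real bot would follow the regex check with an HTTP resolution step.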
I'm sure somebody's pitched that idea to Springer Nature in the past.
If it doesn't exist already. I don't know.
We haven't seen any other questions, but thank you.
Put your contact details at the end of that slide, Mark.
But yep, please feel free to get in touch if you've got any questions or anything that's popped into your head after the webinar. We'll send the recording around in the next couple of days, and the slides as well, so if you want to read up on anything that was in any of the slides in further detail, you'll be able to. But just a big thank you for attending, and a big thank you to Mark and Greg for presenting the webinar this afternoon. Have a great rest of your day, everyone.