Play the webinar
May 11, 2022
Join us for a webinar on how the State of Open Data survey — the annual survey on researchers’ attitudes toward open data and data sharing — can help your institution put the NIH Policy on Data Management and Sharing into practice. In this webinar, Figshare’s Government and Funder Lead, Ana Van Gulick, will go through a few of the key survey results and how you can use these to educate researchers about data sharing best practices and implement data sharing support in compliance with the NIH Policy on Data Management and Sharing. These survey results include:
- Researcher familiarity with and creation of data management plans
- How researchers share their data with the public
- At what point in the research cycle researchers make their data available, if at all
- and more!
Please note that the transcript was generated with software and may not be entirely correct.
Hi everyone, thanks for joining us today for this webinar. My name is Megan Hardeman and I'm the Product Marketing Manager at Figshare. Before I hand over to Ana to discuss using the State of Open Data survey to put the NIH Policy on Data Management and Sharing into practice, I have a few pieces of administration to go through. The first is that this webinar is being recorded, and we will send around the recording and any relevant links Ana discusses in an email following the end of the webinar.
And if you have any questions, there is a Questions section in the GoToWebinar control panel, as well as a Chat section, and you can put your question in either place. There will be some time at the end where we'll answer them for you.
I think that's everything. So with that, I'll hand over to you, Ana.
Great, thanks Megan.
Hi everyone. Good morning, good afternoon, and good evening. Thanks for joining us. I'm Ana Van Gulick, the Government & Funder Lead here at Figshare and head of data curation, joining you from a sunny day in Seattle. One note of apology: my next-door neighbor has decided to do some construction on the siding facing my office window this morning.
So if you hear a little construction noise, sorry about that; hoping it's almost over. Today we're going to be talking about our State of Open Data survey and some key results from it, how those relate to the new NIH Policy on Data Management and Sharing, and trying to pull together the current practices and trends we're seeing, how those might be affected by that upcoming policy, and how you can support your researchers in their data management and sharing needs, for NIH or otherwise.
So please do put your questions in the chat. I will certainly try to save time to get to those at the end.
So I'll begin with the state of open data results. This is a survey that we've been conducting for six years now. And we've had over 21,000 respondents during that time from 192 different countries.
And so we've asked some of the same questions every year, alongside some that have varied over time. Asking the same core questions each year has given us a nice, sustained look at the state of open data over time and the trends we can see: the shift toward more open science, but also the changes in what concerns people have about how to share their data in a way that is rewarded and that they're comfortable with.
So I do encourage you to go find the full State of Open Data Report.
There are a lot of great guest essays in there that we've compiled from different experts in the field, and you can dig into the report, as well as all of the results, at the DOI link here or at our State of Open Data website on our knowledge portal.
So please do go check that out to learn a little bit more, and I'll just be giving a brief review today.
So in our 2021 survey, which as of 2022 is still our most recent, we analyzed about 5,000 responses total. These came from all over the world, with the largest representation from Europe and Asia, as well as from North America.
The fields our respondents work in were largely in the sciences: the biomedical and biological sciences, applied sciences, physical sciences, and Earth sciences, but also the social sciences and a number of other fields. When we look at career stage, inferred from the date of their first peer-reviewed publication, we found that more than half of respondents could be considered late-career researchers, but about 30% were very early-career researchers.
So it's interesting to look at the results broken down by career stage as well, to see the generational shifts inside.
One big trend to note was, of course, the context this survey was done in, which was during the COVID-19 pandemic, which has had huge implications for the research community.
We got huge benefits from scientific research being done at a rapid pace, from genomes being shared to vaccine development, and the pandemic also impacted how everyday researchers were working at their institutions. Maybe they weren't able to go in and collect data in the lab when universities were closed. So how did they shift their work? I think one thing people did was shift to re-using datasets, their own or others': how could you use open data as a starting point to conduct novel analyses and new research?
So in our 2021 survey, about a third of respondents indicated that they had re-used their own or someone else's openly accessible data more during the COVID-19 pandemic than they previously did. We may be starting to see open data powering science at a larger scale: it's not just individual labs collecting datasets, but large datasets being aggregated and re-used for discovery.
So I'll focus today on three key takeaways from the results of the survey. The first of these is actually a little bit surprising, perhaps, and that is that there's more concern about data sharing than ever.
One question we asked in the survey is: what problems or concerns, if any, do you have with sharing datasets? Respondents cited misuse of data as a top concern, but they were also concerned about not receiving credit, issues with copyright or data licensing, sensitive data, permission to share, costs, things like that.
It's perhaps curious that, since open data has been growing steadily over the past decade, people might be more concerned about sharing data. But again, this could tie into the pandemic: preprints became much more common in the last couple of years.
Even more than before, there's been very steep growth during COVID-19, especially in the eye of the public, with reporters covering the results of preprints in newspapers and the public learning about work that's available but not yet peer reviewed.
And it may be that kind of quick dissemination of research results that gives people pause: that their data could be taken out of context or could be misused, and just a general hesitancy or fear about that. That's speculation on my part, but it does perhaps stem from the research culture we saw in 2021.
But we can also look at these concerns longitudinally. Here on the right, you'll see these concerns plotted from the 2018, 2019, 2020, and 2021 survey results.
And so you can see that concern about misuse of data has actually grown steadily over time, from 36% to 43% of respondents.
The same goes for issues about not receiving appropriate credit or acknowledgment. Licensing seems to have always been a concern; copyright and licensing are always a bit of a challenge for people who don't work in those fields every day, and always a little hard to interpret what to do.
Cost of data sharing is something that went up quite a lot from 2018, and we'll talk about big datasets as well.
So, some of these are growing. Some are holding steady.
The second key takeaway we saw in the 2021 survey results is that there is more familiarity with, and compliance with, the FAIR data principles than ever before.
In these results, 66% of respondents had heard of the FAIR data principles: that open data should be findable, accessible, interoperable, and re-usable.
And 54% thought that their data was very much or somewhat compliant with the FAIR principles.
So this is a great story for those of us who have been working in the field for a while, trying to use the FAIR principles to do outreach and training for researchers, to emphasize the importance of data not just being open but being FAIR: well documented, discoverable, re-usable. A few years ago, the number who had heard of the principles was about 30%, so the fact that this has grown over time is a success story for open data and data support.
Then the third key takeaway was: who do researchers turn to for support? When we asked researchers who they would turn to to help make their data openly available, we had about an equal split between repositories, publishers, and institutional libraries.
So this is probably great news for those of us who are doing outreach.
We don't need to compete over who can help researchers more; it shows that all of us are stakeholders in this community and have an important role to play in supporting data sharing, whether you're a publisher, a repository at an academic institution, or an academic library.
I think that's particularly good news for libraries who have been working really hard over the last decade to build up data management and data sharing expertise, as well as resources.
So to see that researchers are turning there is good news, and hopefully that will continue to grow.
So here's a few key takeaways for a couple of these segments.
So for institutions, 30% said they would rely upon their institutional library for help, and almost half said they share their research data in an institutional repository. That may be a bias in our survey sample, because as someone who previously worked at an academic library, half of respondents seems high. But it's really great to see that people are recognizing institutional repositories as a valuable resource.
And 58% of respondents would like greater guidance from their institution on how to comply with data sharing policies. That's key for the new NIH Data Management and Sharing Policy: researchers are looking for guidance and support.
They want institutional support in writing data management and sharing plans, as well as in actually managing and sharing that data in the end. So having that guidance on hand will really help them out.
A few key takeaways for publishers. I don't know if we have publishers joining us today, but they're certainly a key part of this broader research ecosystem. That's reinforced by the fact that 47% of survey respondents said they would be motivated to share their data if there was a journal or publisher mandate to do so. Those mandates certainly hold great weight in the research community.
Peer-reviewed journal publications are still the gold standard for academic research and for hiring and promotion, so those mandates really push progress in the open data field.
It's also interesting to see that more than half of respondents said they had obtained research data collected by other research groups from within a published research article, which can sometimes be challenging: getting that open data and actually re-using it. So it's interesting to see that people are trying to do that, and maybe as they try to re-use other people's data, how they share their own data will improve to meet more reasonable standards.
Similarly, about half of respondents felt it was extremely important that data be available from a publicly available repository.
So this reinforces the point that publishers, and all of us in the community, should support researchers in sharing data in a trusted repository that meets community standards for identifiers, discoverability, and metadata. Researchers do see a difference between data shared in a repository and data shared as supplemental files or tables, or through links, or "available upon request," which are not truly accessible or re-usable; in fact, the dataset may not even be there when they inquire about it. So having the dataset in a repository where you know it is available does seem important to them.
So for publishers, I think we've seen over the last 10, 15, even 20 years a huge increase in publisher data sharing requirements, and now, I think, not just that those requirements exist but that they're actually being checked more as well. A data availability statement doesn't just need to exist; there needs to be a DOI pointing to a data repository, that DOI needs to be live, and when a copy editor goes there, there needs to be a dataset at the end of that DOI. So that's really great news to help move the needle on data sharing adoption.
OK, and the last key takeaways are for funders and government agencies that are supporting the research work.
Here, interestingly, about half of our respondents said funders should make the sharing of research data part of the requirements for awarding grants.
And almost as many said that funding should actually be withheld, or some other penalty should be incurred, if researchers don't share their data.
So they want this mandate to really carry weight: not just on paper, but to actually be checked on and have people comply with it.
In the survey, even more people felt that a national mandate for making research data openly available would be a good idea.
So there is really strong support for funders encouraging the sharing of research results and research data from their funded work. This is, in many cases, publicly funded work, and I think there may be a strong feeling that the public needs to have access to it, and that the data is really valuable and needs to be re-used more than once to get the full value out of it.
And then, of course, lastly, if we come back to who we can get guidance from: researchers say they will turn to their funders for guidance on how to comply with their policies. I know this is something NIH is now working on with their policy, in terms of rolling out additional FAQs, guides, and examples. That will be really important for researchers, and for all of us supporting NIH-funded researchers, because researchers will turn to you for that help.
So, again, just to say that funder data sharing policies have really grown in the last decade, since the 2013 OSTP memo about data sharing for research and development from federal agencies. We've seen more and more federal agencies, as well as private research funders like the Gates Foundation and the Wellcome Trust, mandating that people have data sharing plans in place and that they report on where that data is shared.
So this is a growing interest for all of us. Looking at the timeline here, the previous NIH data sharing policy, for very large awards, goes back to 2003.
Then we have the NSF data management plan requirement coming in 2011, requiring data management plans for all NSF proposals, and I do think we'll probably see that requirement continue to expand and grow in coming years.
Then the OSTP memo expanding public access to research results.
And then we saw some changes from NIH in, in the past 10 years that were specific to genomic data sharing, and clinical trial information. And now, beginning in January, we'll have the new data management and sharing policy.
So here are the notices and the awards.
I'm sure many of you have reviewed these in depth already; maybe you were even involved in providing comments on the policy back in 2019 and 2020. But we are now reaching the end of that long-awaited rollout, and this policy will go into effect in January of 2023, just about eight months from now. So soon it will be top of mind for researchers submitting new NIH proposals as of January 25th.
And so, these are the notices.
And I wanted to point out, for those of you who may not have caught the news, a new NIH website that they launched about a month ago: sharing.nih.gov.
This is a really great resource they've put together for all of their scientific data sharing policies. You can find the new NIH data management and sharing policy there, as well as those for more specific types of research, along with guidance, and I believe this website will be added to over time.
So, to briefly run through sort of the highlights of this policy.
I don't speak for NIH in this context, so please do make sure that you contact program officers at NIH early on about how this will be implemented. But from my perspective and reading of it, here are some highlights of the policy.
The policy will require that all NIH-funded research that generates scientific data have a data management and sharing plan submitted and evaluated; how that will work on an ongoing basis is to be determined by the institutes.
And this will apply to extramural grants, those going to academic institutions, as well as to contracts and to intramural research. So it's really quite broad: any NIH-funded research generating research data.
And this data is described as the recorded factual material commonly accepted in the scientific community as of sufficient quality to validate and replicate research findings.
It doesn't necessarily mean every output generated during the research process.
It's not electronic lab notebooks or notes, but it is the findings you would need to replicate results, which is certainly very important for open science and for replication. And data must be shared regardless of whether it supports a publication or not.
So importantly, it should also cover null results and replications, things that may not be published. That data can be very valuable to the scientific community and should really be thought of as a valuable output on its own, independent of the peer-reviewed publication.
That certainly, something that's been top of mind on academic Twitter this week, is the value of open data and open science practices on its own.
Certainly, from my perspective, we should value that as a scientific contribution.
So, this data should be shared as soon as possible.
But at least by the time of the publication, or by the end of the award period, whichever comes first.
So, something to bear in mind, when researchers are writing these data management plans, is the timeline of data sharing.
And note that not all data must be shared; that's an important point, as the mandate doesn't cover every single piece of data. But the data management and sharing plan should encourage broad data sharing, supporting replications and null results.
The aim is to maximize the value of the open data that's generated. To maximize that value, researchers are encouraged to make the data FAIR, to adhere to those FAIR principles, and to share the data in established and trusted repositories that follow community standards for metadata, use persistent identifiers, and have appropriate preservation plans in place to make the work discoverable.
They're encouraged to use discipline- or method-specific repositories first, where these exist for the type of data, the methodology, the research field, or the file type.
These obviously maximize discoverability and re-use, since they can have very specific metadata.
So that's important, but the policy also suggests trusted generalist and institutional repositories, as many types of data will not have a discipline-specific repository that is appropriate.
So, again, turning back to the sharing.nih.gov site, here's some additional guidance they provide about planning and budgeting for data management and sharing. These data management and sharing costs are allowable direct costs in the award.
So researchers should be encouraged to plan ahead for any costs associated with these processes, whether that's curation of the data or staff time to build public databases.
Certainly, data management and sharing is not without a labor component, right?
One needs to dedicate time to managing this process and doing that documentation, and maybe even bring in experts from across the institution to do it. Data sharing as well may have costs for long-term data storage, especially for very large datasets, or for curation or review in specific repositories.
So those are allowable costs.
Program officers at specific institutes will be able to speak to how they'll be reviewing those data management and sharing plans (sorry, it's a mouthful).
The DMSP will, I understand, be available to peer reviewers to review, especially the budgets, but will not be scored by them; my understanding is that review will be done by the program officer.
So that will be the best bet for researchers looking to have very specific questions answered for their field, because there may be institute-specific and discipline-specific differences in what they're looking to see in these plans, and in whether there are community standards that should be adhered to. An important note for supporting your researchers: if you go to Sharing Scientific Data, they have a page on selecting a data repository.
And this is where I want to talk about the data repository ecosystem, and specifically those domain-specific repositories, which NIH has put together a nice list of here. They also point to the ... database to help researchers find appropriate repositories, as well as listing those that are supported by NIH, which is many, including those for, say, genomic data sharing, which should still be prioritized as the discipline-specific option.
But they also point to generalist repositories. They can't point to every institutional repository, though hopefully researchers are aware of them. They are saying: we realize there are gaps in the discipline-specific repository space, there are many other valuable research outputs to be shared, and generalist repositories are trusted sources for those. Those include Dryad, Figshare, Open Science Framework, and others, and researchers can use them.
So, thinking about the generalist repositories in this space, here's a graph of citations of data in generalist repositories.
These are citations pulled from the Dimensions database, looking for the DOIs of those specific generalist repositories over the past 10 years. You can see really continuous growth for all of these; interestingly, ... is kind of running away with it in the last three years in terms of the number of citations (maybe they host more software, which tends to be cited more than other research outputs), with Figshare coming second there.
You can see that researchers are turning to these flexible repositories and that research in these repositories is being cited in the scholarly literature. Whether that's a primary citation or a citation of re-use is an interesting question for us to continue looking at down the road, but they're a valuable part of that landscape.
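As a rough sketch of the kind of analysis described above, one could tally cited-data DOIs by which generalist repository minted them, using well-known DataCite DOI prefixes (Figshare 10.6084, Dryad 10.5061, Zenodo 10.5281). The prefix map and sample DOIs below are an illustrative subset, and this is not how Dimensions itself computes its figures:

```python
from collections import Counter

# Well-known DataCite DOI prefixes for a few generalist repositories
# (illustrative subset; a real analysis would cover more prefixes).
PREFIXES = {
    "10.6084": "Figshare",
    "10.5061": "Dryad",
    "10.5281": "Zenodo",
}

def count_by_repository(dois):
    """Tally DOIs by the generalist repository that minted them."""
    counts = Counter()
    for doi in dois:
        prefix = doi.split("/", 1)[0]  # registrant prefix comes before the first slash
        counts[PREFIXES.get(prefix, "other")] += 1
    return dict(counts)

# Made-up DOIs, just to show the shape of the input.
cited = [
    "10.6084/m9.figshare.123",
    "10.5281/zenodo.456",
    "10.5281/zenodo.789",
    "10.5061/dryad.abc",
]
print(count_by_repository(cited))  # {'Figshare': 1, 'Zenodo': 2, 'Dryad': 1}
```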
Another thing NIH has outlined in their DMS policy is a set of desirable repository characteristics.
These are, as I understand it, largely taken from the White House OSTP repository characteristics, but they also include some considerations specifically for human subjects data that could be important in certain situations.
They focus on open access, persistent identifiers, metadata, re-use that can be measured, security, recorded provenance, and the ability to restrict access when necessary. We do have a guide on our Help page about how Figshare meets these desirable characteristics. It won't be appropriate for every type of data; for some very restricted data types there may be a better option. But largely we're in compliance with almost all of these characteristics, so researchers and those supporting them can find that on our Help page.
And we've been fortunate to work with NIH for a few years now on data sharing initiatives for generalist repositories in this space of supporting NIH-funded data sharing. We conducted a pilot repository from 2019 to 2020. The repository is still available at nih.figshare.com, and all of the data is still publicly available.
You can go find it there today.
And this was to look at the need for a generalist repository among NIH-funded researchers, and what we saw was that there really was a need for this: researchers had a lot of different datasets to share.
We also found an impact of having someone review the metadata: trying to make the metadata of these datasets high quality and complete, making sure there was a meaningful title, that funding sources were linked, that associated publications were linked out to, and that people knew where to find more resources and context about the work.
So something was not simply titled "dataset.xlsx", "dataset one", or "mouse data", as people commonly do if they're not aware of these best practices. With human support and human review of the datasets, we saw a big impact from this project.
And now we're pleased to be working with NIH, together with five other generalist repositories, once again.
So this is a project that was announced at the end of January and kicked off this year.
It's called the GREI, the Generalist Repository Ecosystem Initiative, run by the NIH Office of Data Science Strategy.
It's bringing together Figshare with Dryad, Dataverse, Mendeley Data, Open Science Framework, and ...
to work on growing the generalist repository landscape to support NIH-funded research data.
How can we have standards that are common and interoperable between these repositories for discoverability, for indexing, for metrics of re-use? How can we also have differences among the repositories that best meet the needs of different NIH funded use cases?
And how can we, together, support data sharing and, very importantly for NIH, reporting on data sharing as well?
So this is really a community effort to get to open data. We've seen that growth in open data over the last 10 years.
I think the next 10 years looks like a big growth in data that is not just open, but that is FAIR.
So that's where academic research institutions are a really key part of this ecosystem, as well as research communities, and then the funders themselves, the infrastructure, and the publishers as well.
So we can collaborate on infrastructure, policies, outreach, and training, supporting researchers.
So that brings us to a little bit about Figshare and how Figshare might support your NIH-funded researchers. We just celebrated 10 years of Figshare; we're actually celebrating that birthday all year. It feels like a big milestone, and through that whole trajectory of growth in generalist repositories, we've been there for it. On Figshare.com there are now more than four million research outputs, half a million users, hundreds of terabytes of data stored, and more than 100,000 citations in the scholarly literature to that work.
And we're also really proud to provide repository infrastructure to more than 80 institutions to run their own research repositories.
So when we think about this repository landscape, we have the discipline-specific repositories, like those funded by NIH: GenBank, dbGaP. Very specific, really great for re-use of specific types of data; certainly in genetics it's had a huge impact, and we've seen that with COVID-19. Then we have the generalist repositories, like those in the GREI initiative, which includes Figshare.com.
And then we have the institutional repositories, which you may have at your institution, or may be thinking about expanding, to make sure they are data-capable and really supportive of data sharing use cases and this type of work.
And so, about Figshare.com: that's a freely available repository, and we will always continue offering it freely to researchers. They can share datasets up to 20 gigabytes and files up to 20 gigabytes there. It offers flexibility, meets researcher workflows, and adheres to persistent metadata standards.
And it's a great way for researchers to share their work in a way that's fully open access: everything on Figshare.com is fully accessible to humans and machines, and can be downloaded via our API as well.
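Since everything public on Figshare.com is machine-accessible, here's a minimal sketch of pulling public article metadata via Figshare's public v2 REST API at api.figshare.com (no authentication needed for public records). The sample records and the helper names are illustrative, not part of Figshare's documentation:

```python
import json
import urllib.request

API_BASE = "https://api.figshare.com/v2"  # Figshare's public REST API base URL

def extract_dois(articles):
    """Pull the DOI out of each article record; skip records without one."""
    return [a["doi"] for a in articles if a.get("doi")]

def list_public_articles(page_size=5):
    """Fetch one page of public article metadata from the live API."""
    url = f"{API_BASE}/articles?page_size={page_size}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Offline sample of the record shape, so the parsing can be shown
# without a network call. These records are made up for illustration.
sample = [
    {"id": 1, "title": "Example dataset", "doi": "10.6084/m9.figshare.0000001"},
    {"id": 2, "title": "No DOI yet", "doi": ""},
]
print(extract_dois(sample))  # ['10.6084/m9.figshare.0000001']
```

In practice one would call `list_public_articles()` and pass the result to `extract_dois`; the sample list above just stands in for a live response.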
Researchers can also see the impact of their work through openly tracked metrics, create a researcher profile page, and get started with data sharing through our free generalist repository option.
We also recently, last fall, launched a new Figshare repository called Figshare Plus.
This repository is at plus.figshare.com, and it's designed to support larger datasets, those over the 20-gigabyte limit. That's simply because we had a lot of requests from researchers coming into our support team,
saying, "I really want to use Figshare, but I have a really big dataset." Sometimes that's 80 gigabytes, sometimes 250 gigabytes, sometimes nine terabytes. Data is growing in volume, computing power is growing, and the need for large-scale data for machine learning and data science in biomedical fields, and really across all fields of research, is growing. So I think these large datasets are going to become more common, and we want to support researchers in all of these use cases; we don't want to turn them away. But it was simply an issue of sustainability for us as a free service.
You know, we need to be able to cover our costs.
And once you start hosting large datasets redundantly in the cloud for many years, there is a true cost associated with that cloud storage. So we designed Figshare Plus so that we could do this in a sustainable way, and that was through a one-time data publishing charge.
Researchers can see those charges transparently listed on our Figshare Plus website, on our knowledge portal at knowledge.figshare.com/plus, and build them into their data management and sharing plans. So you can see those costs, plan for them ahead of time, and write them into grants, whether it's 1 terabyte or 10 terabytes; either is fine. The limit here is five terabytes per individual file, which is just an AWS limit.
And the other thing we're doing with Figshare plus is offering a little bit more metadata and a little bit more customer support.
So taking the lessons we learned from the NIH Figshare pilot that one-on-one support during data deposit really helps make the data more fair and that reviewing the metadata enhances the discoverability of the work quite a lot.
So we are also assisting researchers do it during that data deposit phase.
And hopefully that will make a big impact as well.
It's something we're actually interested in doing with figshare.com as well, which is simply a scaling issue when you have millions of research outputs. So that's something we're working on with the GREI project: scaling metadata quality.
How can we do that without human curators? Are there ways to nudge best practices or to automate data curation?
And then the other way that we support data sharing is through Figshare for Institutions.
So Figshare for Institutions is a customizable, standards-compliant research repository.
Out of the box and ready to go, but very customizable to make your own, to support your organization's research data management needs and to provide open access to any type of research output that your researchers have to share.
So here's a few examples.
From Virginia Tech, the University of Arizona, and Carnegie Mellon University, where I was before joining Figshare. These are Figshare-powered institutional repositories that are data-capable, and you can see that they're customized with their own URL.
For example, kilthub.cmu.edu; their own landing page and searchability to really showcase the research outputs of their institution; and it can even be customized to showcase the outputs of specific groups, labs, or departments within the institution as well.
So there's out-of-the-box repository infrastructure that meets all of those community best practices, but then lets you customize it for your needs.
So, in terms of research data management, these repositories allow you to control access to the research outputs, and they have a private storage and collaboration side.
So researchers at your institution can log in using your single sign-on for user accounts, then upload files and collaborate in what we call projects to share them before they are published, and can then publish them publicly through your review. So you can showcase these public research outputs with a customized repository page. Here's a couple of other examples: one, in the top left, is from Janelia, the Howard Hughes Medical Institute research campus in Virginia, as well as that NIH portal, which is all built on the same infrastructure; our Figshare Plus portal is also built on that. So we're actually using it ourselves now.
It's interesting to be a repository manager on the inside with our own infrastructure. But these provide open access, so data and code can be published openly, so that it's discoverable, citable, and most importantly, so that it meets those new mandates.
So, having a resource like a Figshare for Institutions repository will allow researchers to write that into NIH data management and sharing plans, or NSF data management plans, or those of any other funder requiring public access to research data. They can know about the resource ahead of time and work with your expert staff on putting it into their plans.
And then share the data there along the way, during the project.
You can report on the specific DOIs of each research output so that program officers can find those outputs, and so that they can be included in publications. The DOIs can be reserved in advance, so researchers will never miss the opportunity to include the dataset DOI in the publication. I know that's a common chicken-and-egg problem: you're publishing the paper, you're sharing the data; how do I get the two DOIs into each? You can reserve the dataset DOI, put it in the paper, publish the paper, and then update the dataset metadata with the publication DOI.
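For anyone scripting this reserve-then-update workflow against the repository, it maps onto two Figshare v2 REST endpoints. This is a minimal sketch, assuming the endpoint paths from Figshare's public API documentation; the article id is a placeholder, and the actual calls would need an authenticated POST/PUT with a personal token:

```python
# Sketch of the DOI "chicken and egg" workflow described above, using
# Figshare v2 REST API paths (an assumption based on the public API docs).
# Only the URLs are built here; e.g. the requests library plus an
# "Authorization: token ..." header would perform the real calls.
BASE = "https://api.figshare.com/v2"

def reserve_doi_endpoint(article_id: int) -> str:
    # Step 1: POST here before publication to reserve the dataset's DOI,
    # so it can be cited in the paper's data availability statement.
    return f"{BASE}/account/articles/{article_id}/reserve_doi"

def update_article_endpoint(article_id: int) -> str:
    # Step 2: after the paper is out, PUT updated metadata here, e.g. a
    # resource DOI field linking the dataset back to the publication.
    return f"{BASE}/account/articles/{article_id}"

print(reserve_doi_endpoint(1234567))
print(update_article_endpoint(1234567))
```

The point of the sketch is just the ordering: the reserve call happens before the item is made public, so the dataset DOI exists in time to go into the manuscript.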
Our outreach, as a community, is about making that just the everyday work of research.
So here's a few other features of the Figshare for Institutions functionality and infrastructure.
And so here's a couple of examples. On the left, you can see you can share many files or a single file per item, an item being that landing page that has a description and a unique digital object identifier. If you'd like, you can use our DataCite DOIs, or, since we're a DataCite node, we can get you DataCite DOIs that are unique to your institution. Those will be minted for each item, can be reserved in advance, and are also version-controlled.
Any file type can be shared, so this offers the same flexibility as figshare.com, of course. You, as the repository managers, can put any restrictions in place that you would like, to encourage, you know, preservation-friendly file formats and things like that.
Which is often something we do when working with researchers. But we recognize that sometimes that flexibility is needed, so any file type can be uploaded, up to five terabytes, with in-browser preview.
So this could be data, code, images, video, workflows, papers, theses, and dissertations: anything that you want to host in your repository and your researchers need to share.
We offer an open Figshare API for upload or download of files, as well as an FTP server. So there are a lot of different ways to get large datasets into the repository.
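As one illustration of that open API, public reads need no authentication at all. A minimal sketch, assuming the v2 `/articles` endpoint and its paging parameters as given in the public API documentation:

```python
# Builds a figshare.com v2 API query URL for listing public items.
# Endpoint and parameter names are assumptions taken from Figshare's
# public API docs; fetching the resulting URL (e.g. with urllib.request)
# returns JSON metadata for each item, including its downloadable files.
from urllib.parse import urlencode

def public_articles_url(page: int = 1, page_size: int = 10) -> str:
    # page / page_size drive simple pagination over the public items.
    query = urlencode({"page": page, "page_size": page_size})
    return f"https://api.figshare.com/v2/articles?{query}"

print(public_articles_url(page=2, page_size=25))
```

Because reads are anonymous, a script like this is enough to harvest item metadata in bulk; authenticated endpoints are only needed for uploads and private items.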
Custom metadata: so, importantly, we have our community-standard metadata that goes out to DataCite, and Google Scholar, and things like that.
But with a Figshare for Institutions portal, you can add additional custom metadata, and you can do this by item type or by group.
So, if you have a specific research community in the arts that wants to have their own metadata fields, you can customize it just for that department.
If you want to add something for clinical trials, for a medical research group, you can add that.
So really, a lot of ways to customize the metadata with this.
And then lastly, we have a few features that help with the FAIRness of the data. So this is our curation module, which allows for review. You'll see at the top here: datasets come in, they're submitted, and then they aren't published right away if you turn on this review feature, which is optional.
But many, many organizations find this really helpful for enhancing the FAIRness of data.
So then you can have experts at your institution or even experts here at Figshare with our Figshare curation services team, review the datasets, and get in touch with the researchers to make any revisions.
Whether that's adding a bit more context, or making sure all the authors, funding IDs, and related resources are listed; making sure that metadata is as complete and as high-quality as possible.
Then, once it's checked and FAIR, publishing it in the repository.
Figshare for Institutions also provides more restricted-access functionality. On figshare.com, researchers can set embargoes.
But at a Figshare for Institutions site, you could restrict access to just logged-in users, or just to certain groups within the institution, say within their department or within a college.
Or you could also use one of our newer features, which is request access, for datasets that really can't be shared publicly.
But you want to have a landing page and a DOI for them; those who are interested in getting the data can request access, and that will send a request to the author who shared the data, as well as to a repository administrator.
So, if it's a dataset that has restricted access, maybe a data use agreement is required for it, or IRB approval to access it, with human subjects data.
This offers that option, which could actually be quite valuable for the NIH mandate, for sharing some of those more restricted datasets.
So you can easily manage this repository.
You can manage the user groups with a single sign-on integration with your HR feed.
Manage storage for those groups, set default storage allocations, and administer storage requests for larger datasets.
The storage is yours to allocate how you like, up to many terabytes per researcher.
Then importantly, you can track the impact of this work.
So tracking impact of work is valuable for the individual researcher, for the funder, and of course, for the institution as well.
So each item on Figshare has publicly available views, downloads, and citation counts. And these citation counts are pulled from the full text of the scholarly literature, which is really important: we're not just searching reference fields here, so it's going to capture data availability statements and in-text citations of datasets, which is often where researchers are citing them.
So they can see the impact of that. There are also Altmetric scores to capture other types of attention, from Twitter, from bloggers, from the news media, and such, which may come before formal citation in the scholarly literature, so you can see that quicker impact for some types of work. And the impact can be tracked in Figshare for Institutions at the item level, the researcher level,
the group level, say a department, or at the whole-institution level, so you can generate reports and, you know, see the usage of data sharing and the impact that that open data is having.
So that's it for me today. Thank you so much for joining us.
I hope this gave you a good glimpse into what the state of open data is, where funder policies, including at NIH, are going, and how Figshare might be a good way to support your researchers. And I will stop there. You can go to these URLs to learn more, and you're also welcome to get in touch with me directly, or with Megan. So I'll stop there, and Megan, do we have any questions yet?
Thanks, Ana. Yes, we do have a few questions. Just a note to say I'll put all of those URLs in the follow-up e-mail as well, in case you missed them.
So the first one is: in general, the State of Open Data results show that a substantial proportion of researchers care about sharing and using available research data, which is awesome. How much of a concern is there for sample bias, i.e., that the respondents to the survey aren't representative of researchers writ large?
And I can actually answer that one, Ana, and you can jump in if you've got any follow-ups, since I was involved in the State of Open Data survey and report last year.
And it is possible that there is some survey self-selection bias; we did find that 72% of the respondents were open science advocates, so that might lead toward thinking that way.
We do try to spread news of the survey as far and wide as possible.
So we work with Springer Nature, who actually organize and analyze the survey results for us, so it's a larger group of people who are trying to promote it. But there is the possibility of sampling bias. I don't know if you've got anything else you wanted to add, Ana.
Yeah, I would just say it's certainly possible, if not likely, that a survey titled "State of Open Data" would attract those who are already inclined positively towards the practice.
I've conducted some other surveys in my own work, researching data management and sharing practices among MRI researchers and psychology researchers, and I think you certainly see that bias when you look at the results.
We asked them to rate the maturity of their data management practices, and they would say that their practices were much better than those of the community as a whole.
So we asked for their perceptions of their research community, and then where they thought they stood in relation to that spectrum, and our respondents almost always thought they were further ahead than their community as a whole.
So they're reporting that as well: you clearly see that they think other researchers in their community are not as far along in data management and sharing practices as they are.
We try our best.
And I think hopefully it's a shift that will kind of sweep everyone up as the mandates come into play.
Great, thanks, Ana. And in terms of the NIH policy, there's a question: data sharing is not mandated or incentivized by preference in awards, and is still only strongly encouraged?
So, I'm not sure entirely what we're talking about here. It could be a couple of different things.
I think it's going to vary a lot by institute and program officer.
If you're thinking about whether previous data sharing matters for getting awards, say data sharing that you would report in a biosketch, I think it's going to be quite variable how that is seen. My hope is that it's starting to be seen more and more favorably. Again, you should probably ask this question of someone at NIH, not me, but I'll give you just my thoughts on it.
And so that's growing. But it's similar to thinking about open data as a valuable contribution in promotion and tenure too, right? That shift is taking a bit of time.
My sense is that data sharing will be pretty heavily required for new awards after this policy goes into effect.
Not every dataset will need to be shared, not every piece of data that is collected. But I do think no one will be able to get away with having an award and not sharing any data at the end of it; I do think a program officer would say, no, you must have this plan, and this plan must include some data sharing.
In the FAQ that NIH released, they have a list of reasons that are not good enough not to share your data.
One of them was: you don't think your data will be useful to anyone else. They were like, no, that's not a reason; please share your data. So I think NIH will be enforcing this requirement more. What exactly that enforcement looks like, I think, may still be a work in progress, and it's worth you or your researchers reaching out to specific program officers to ask how they'll be doing that.
But researchers should probably be prepared that at least some level of data sharing (certainly not all; it's not a mandate of all data) must start to be practiced.
Thank you. There's a question about whether the slides will be available alongside the webinar recording; would you be happy for us to share those? We can post the slides as well.
I think there's a question here. Do you have any kind of numbers or testimony about the impact of human reviews on dataset metadata?
It's an interesting question and I think the data's still a little fuzzy at the moment. So there's not something public I can point you to.
I can say that one thing we tried to do during the NIH Figshare pilot was to attempt some sort of apples-to-apples comparison: looking at NIH-funded data published in the NIH Figshare repository that had undergone metadata review and enhancement, compared to datasets on figshare.com that we believed to be NIH funded, "believed" meaning that someone had written the words NIH, or National Institutes of Health, or something like that into the funding fields.
But those datasets had not been checked.
So when we tried to compare those two groups of datasets, what did we see in terms of differences? The most notable difference is simply that there was much more text in the checked ones: the titles were longer, more characters, and the description field had many more words and characters in it as well.
Now, it's a little bit of a leap to say that longer is better, but having almost no characters in your dataset description is surely not very helpful.
So you can say, at the extremes, that correlation would hold true.
We then looked at the metrics of these datasets, the downloads and views, and we repeated this analysis a couple of times, although probably not since last fall at this point, and we do see an increase in the views and the downloads of the NIH Figshare checked datasets compared to figshare.com.
So the hope is that citations continue to follow that; you know, there's a citation lag. Obviously, we won't see that in just eight months or a year; citations will often follow years later for an open dataset. And there could be some confounds there: you know, we also promoted the datasets in NIH Figshare more by writing case studies about them, or they were tied to higher-impact journal articles, or something like that.
But I think overall, I would say there is a definite trend that datasets with better metadata have more views and downloads, and citations as well, and we're going to keep looking at that data
as part of our NIH work. And soon we'll be able to look at the datasets in Figshare Plus as well, which are also checked. Of course, the confound there is that they're very large datasets, so how many people download a many-terabyte dataset also throws a wrench into looking at those statistics.
But I think it's something we're all interested to explore.
Thank you. I'll just add to that as well.
In the 2021 State of Open Data report, there's a contribution from the data curation team at 4TU in the Netherlands about the sort of processes they go through when they're enhancing their researchers' metadata in their repository, which might be of interest to you as well.
And the last question: is Figshare considering developing the capability to host sensitive data as part of the GREI initiative?
Yeah, that's a great question.
And I think it is on our radar, but not in our plans yet.
It would be a bit of an infrastructure shift for us, so we would certainly have to plan for that.
And the question is whether it makes sense for Figshare to do that, versus partnering with other repositories like ... that are designed to host clinical trials data. But I think we are continuing; in kind of the first year of that project, a lot of it is exploring the use cases and the functionality that already exist in that generalist repository space, and then in the out years of the program, we'll be starting to fill in the gaps.
So I would say, stay tuned.
I know it's something our institutions that work with Figshare may be keen to have added, too. So our roadmap is heavily informed by client feedback, both from end users on figshare.com and from Figshare for Institutions clients.
And so what we hear from them as gaps and needs will also inform our development work in that area.
So we're eager to work on that, yeah.
All right, thank you for all of those great answers, and thank you to everyone who asked a question. If you do have a follow-up question, or something pops into your head after the end of the webinar, please do get in touch, either with myself at firstname.lastname@example.org or via figshare.com, and we'll be happy to answer that for you.
And thanks again for coming and have a great rest of your day. Thanks, everyone.