---
title: |
  Searching for Elusive Arctic Dataset Citations
subtitle: "Data Fellowship Project 2022"
author:
  - name: Althea Marks
    orcid: 0000-0002-9370-9128
    email: [email protected]
    affiliations:
      - name: University of California Santa Barbara
        department: National Center for Ecological Analysis and Synthesis
        address: 1021 Anacapa St
        city: Santa Barbara
        state: CA
        postal-code: 93101
        url: https://www.nceas.ucsb.edu/
date: "`r Sys.Date()`"
format:
  html:
    number-sections: true
    toc: true
    code-tools: true
    theme: cosmo
    self-contained: true
    title-block-banner: "#B5E1E7"
    title-block-banner-color: "#146660"
---
## Purpose
Run all Arctic Data Center (ADC) dataset DOIs through `scythe` and compare the results to citations already known to the DataONE metrics service. Known ADC citations have mixed origins, including DataCite, previous `scythe` runs, and manual additions via the ADC UI.
## Questions
1) Does the addition of the [xDD](https://geodeepdive.org/) digital library to the [Scythe package](https://github.com/DataONEorg/scythe/tree/main) improve the quality and scope of citations in the ADC? Does increasing the number of sources we search result in more complete coverage (quality)?
- Overlap in citations among sources
- Inspired by species rarefaction curves: work toward a point where we can estimate the actual number of dataset citations out there. Dataset citations may be rare enough that the technique is not applicable (see the rarefaction sketch after this list).
> The calculation of species richness for a given number of samples is based on the rarefaction curve. The rarefaction curve is a plot of the number of species against the number of samples. This curve is created by randomly re-sampling the pool of N samples several times and then plotting the average number of species found on each sample. Generally, it initially grows rapidly (as the most common species are found) and then slightly flattens (as the rarest species remain to be sampled). [source](https://www.cd-genomics.com/microbioseq/rarefaction-curve-a-measure-of-species-richness-and-diversity.html)
*Would this mean sampling the entirety of ADC DOIs?*
2) Does the prevalence of data citations differ among disciplines (environmental vs. social science)?
- Use ADC discipline classifications
- Dataset citations are rare and the number of datasets per classification varies widely, so sampling biases need to be controlled for <https://zenodo.org/record/4730857#.YoaQ2WDMKrM>
3) The total number of citations is extremely useful. Ground-truth analysis: for a small number of datasets, manually search through the literature for citations.
4) Do usage metrics (downloads and views) correlate well with citation metrics?
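As a starting point for the rarefaction idea, here is a minimal sketch, not part of the original pipeline, of a citation accumulation curve. It assumes the `scythe_cit` table assembled later in this analysis (columns `article_id` and `dataset_id`); datasets play the role of samples, and the curve tracks the mean number of unique citations recovered across random draws.
```{r rarefaction_sketch, eval=FALSE}
#| code-fold: true
# Hedged sketch: rarefaction-style accumulation curve for dataset citations.
# Assumes `scythe_cit` (built in the Analysis section) with columns
# `article_id` and `dataset_id`; a citation is a unique pair of the two.
citation_accumulation <- function(cit, n_sizes = 25, n_draws = 100) {
  datasets <- unique(cit$dataset_id)
  sizes <- unique(round(seq(1, length(datasets), length.out = n_sizes)))
  mean_cits <- sapply(sizes, function(n) {
    mean(replicate(n_draws, {
      sampled <- sample(datasets, n)
      # count unique article-dataset pairs among the sampled datasets
      nrow(unique(cit[cit$dataset_id %in% sampled,
                      c("article_id", "dataset_id")]))
    }))
  })
  data.frame(n_datasets = sizes, mean_citations = mean_cits)
}
# plot(citation_accumulation(scythe_cit), type = "b")
```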
## Methods Overview
- Gather existing/known ADC dataset citations picked up by the automated DataONE metrics API
- Get a list of all ADC dataset DOIs
- Run all ADC dataset DOIs through `scythe` libraries
- Review HTTP errors and rerun
- Calculate citation source overlap
- Compare citations from `scythe` to DataONE metrics
## R Setup
```{r md_setup, include=F}
knitr::opts_chunk$set(echo = TRUE, eval = FALSE, message = FALSE)
```
```{r analysis_setup, eval=T, message=FALSE, warning=FALSE}
#| code-fold: true
# set date here. Used throughout data collection, saving, and analysis. YYYY-MM-DD
#date <- "2022-07-14"
date <- "2022-11-03"
# vector of APIs used in analysis
source_list <- c("scopus", "springer", "plos", "xdd")
# ADC color palette
adc_color <- c("#19B36A", "#B5E1E7", "#1B897E", "#7AFDB1", "#1b897E", "#1D254E")
# load libraries
source(file.path("./R/load_pkgs.R"))
# create directories and file paths
source(file.path("./R/analysis_paths.R"))
# functions for data collection and analysis
source(file.path("./R/functions.R"))
```
## Search For Citations
### Current known ADC citations
Send a GET request with the following body to the DataONE Metrics Service production endpoint <https://logproc-stage-ucsb-1.test.dataone.org/metrics> (documentation: <https://app.swaggerhub.com/apis/nenuji/data-metrics/1.0.0.3>).
```{r get_request_citations}
#| code-fold: true
{
"metricsPage":{
"total":0,
"start":0,
"count":0
},
"metrics":["citations"],
"filterBy":[{
"filterType":"repository",
"values":["urn:node:ARCTIC"],
"interpretAs":"list"
},
{
"filterType":"month",
"values":["01/01/2012",
"05/24/2022"],
"interpretAs":"range"
}],
"groupBy":["month"]
}
```
Example request:
```{r example_request}
#| code-fold: true
https://logproc-stage-ucsb-1.test.dataone.org/metrics?metricsRequest={%22metricsPage%22:{%22total%22:0,%22start%22:0,%22count%22:0},%22metrics%22:[%22citations%22],%22filterBy%22:[{%22filterType%22:%22repository%22,%22values%22:[%22urn:node:ARCTIC%22],%22interpretAs%22:%22list%22},{%22filterType%22:%22month%22,%22values%22:[%2201/01/2012%22,%2205/24/2022%22],%22interpretAs%22:%22range%22}],%22groupBy%22:[%22month%22]
}
```
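For reference, a hedged sketch of issuing this request directly from R with `httr` and `jsonlite`; the analysis itself uses the `metrics_citations()` helper sourced from `R/functions.R`.
```{r metrics_request_sketch, eval=FALSE}
#| code-fold: true
# Hedged sketch only; the actual call is wrapped in metrics_citations().
library(httr)
library(jsonlite)
metrics_request <- list(
  metricsPage = list(total = 0, start = 0, count = 0),
  metrics = list("citations"),
  filterBy = list(
    list(filterType = "repository",
         values = list("urn:node:ARCTIC"),
         interpretAs = "list"),
    list(filterType = "month",
         values = list("01/01/2012", "05/24/2022"),
         interpretAs = "range")),
  groupBy = list("month"))
resp <- GET("https://logproc-stage-ucsb-1.test.dataone.org/metrics",
            query = list(metricsRequest = toJSON(metrics_request,
                                                 auto_unbox = TRUE)))
citations <- fromJSON(content(resp, as = "text"))
```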
```{r adc_citations, cache=TRUE}
#| code-fold: true
# Run ADC API Get call, unnest target_id results to individual columns
dataone_cit <- metrics_citations(to = as.POSIXct(date)) # use analysis date to constrain search
dataone_cit <- tidyr::unnest(dataone_cit,
cols = c(target_id, source_id, source_url,
link_publication_date, origin, title,
publisher, journal, volume, page, year_of_publishing))
write_csv(dataone_cit, file.path(output_directory,
paste0("dataone_metrics_cit_", date,".csv")))
```
### Query SOLR
The DataONE metrics API can only provide the data package DOIs that have citations; it cannot provide a comprehensive list of all data package DOIs contained within the ADC. To search through all of the repository metadata we query the DataONE search index (an Apache Solr search engine). Solr is the same mechanism that underlies the DataONE online search tool, and it supports complex logical query conditions.
::: callout-tip
Call `dataone::getQueryEngineDescription(cn, "solr")` to return a complete list of searchable Solr fields.
:::
#### Get all ADC DOIs
```{r SOLR_query_doi}
#| code-fold: true
# set coordinating node
cn <- dataone::CNode("PROD")
# point to specific member node
mn <- dataone::getMNode(cn, "urn:node:ARCTIC")
# set up Solr query parameters
queryParamList <- list(q="id:doi*",
fl="id,title,dateUploaded,datasource",
start ="0",
rows = "100000") # set number to definitely exceed actual number
# use `q = "identifier:doi* AND (*:* NOT obsoletedBy:*)"` to only include current versions of data packages
# DataOne aggregates citations across dataset versions
# send query to Solr, return results as dataframe
solr_adc_result <- dataone::query(mn, solrQuery=queryParamList, as="data.frame", parse=FALSE)
write.csv(solr_adc_result, file.path(output_directory,
paste0("solr_adc_", date, ".csv")))
```
#### Get all ADC discipline ontology classifications
The ADC created a research discipline ontology to classify datasets; the [root of the ADC discipline semantic annotations](https://bioportal.bioontology.org/ontologies/ADCAD/?p=classes&conceptid=root) is where to start, and Classes/ID is where to look for query specifics. Below is an example Solr query that searches for two of those disciplines:
https://cn.dataone.org/cn/v2/query/solr/?q=sem_annotation:*ADCAD_00077+OR+sem_annotation:*ADCAD_00005&fl=identifier,formatId,sem_annotation
::: callout-note
Every individual social science (SS) discipline ID needs to be listed in the query; Solr is not yet set up to query the umbrella SS ID.
:::
```{r SOLR_query_disc}
#| code-fold: true
#|
# Run second Solr query to pull semantic annotations for 2022_08_10 DOIs
# set up Solr query parameters
discQueryParamList <- list(q = "id:doi* AND (*:* NOT obsoletedBy:*)",
fl = "id,title,dateUploaded,datasource,sem_annotation",
start ="0",
rows = "100000")
# send query to Solr, return results as dataframe. parse = T returns list column, F returns chr value
solr_adc_sem <- dataone::query(mn, solrQuery=discQueryParamList, as="data.frame", parse=T)
# POSSIBLE BREAK POINT - read in url
# read in csv with coded discipline ontology
adc_ont <- read.csv("https://raw.githubusercontent.com/NCEAS/adc-disciplines/main/adc-disciplines.csv") %>%
# use ontology id to build id url - add required amount of 0s to create 5 digit suffix
mutate(an_uri = paste0("https://purl.dataone.org/odo/ADCAD_",
stringr::str_pad(id, 5, "left", pad = "0")))
solr_adc_sem$category <- purrr::map(solr_adc_sem$sem_annotation, function(x){
t <- grep("*ADCAD*", x, value = TRUE)
cats <- c()
for (i in seq_along(t)){ # seq_along() avoids iterating over 1:0 when no ADCAD annotation is found
z <- which(adc_ont$an_uri == t[i])
cats[i] <- adc_ont$discipline[z]
}
return(cats)
})
# extract discipline categories from single column to populate new columns
disc_adc_wide <- solr_adc_sem %>%
unnest_wider(category, names_sep ="_") %>%
select(-sem_annotation, -datasource, -title) %>%
rename("dataset_id" = id)
write.csv(disc_adc_wide,
file.path(output_directory, paste0("solr_adc_", date, "_disc.csv")),
row.names = FALSE)
```
*The Solr query does not yet include a date search term to align with the `date` object; `date` is currently only used to save and read the .csv files. A possible fix is sketched below.*
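A hedged sketch of one way to close that gap, adding a `dateUploaded` range (standard Solr date-range syntax) to the query so results align with the analysis `date`:
```{r solr_date_filter_sketch, eval=FALSE}
#| code-fold: true
# Hedged sketch (not run in this analysis): constrain the Solr query to
# datasets uploaded on or before the analysis date.
dateQueryParamList <- list(q = paste0("id:doi* AND dateUploaded:[* TO ",
                                      date, "T23:59:59Z]"),
                           fl = "id,title,dateUploaded,datasource",
                           start = "0",
                           rows = "100000")
solr_adc_dated <- dataone::query(mn, solrQuery = dateQueryParamList,
                                 as = "data.frame", parse = FALSE)
```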
### Run DOIs through `scythe`
```{r All_ADC_DOIs}
#| code-fold: true
# read in saved SOLR results
solr_adc_result_csv <- read_csv(file.path(output_directory,
paste0("solr_adc_", date, ".csv")))
# create vector of all ADC DOIs from solr query `result`
adc_all_dois <- c(solr_adc_result_csv$id)
```
APIs can have request rate limits. The specific rates are often found in the API documentation or in the API response headers, and queries that exceed them will fail.
```{r get_API_rate_limits}
#| code-fold: true
# Scopus request Limits
key_scopus <- scythe::scythe_get_key("scopus")
url <- paste0("https://api.elsevier.com/content/search/scopus?query=ALL:",
"10.18739/A2M32N95V",
paste("&APIKey=", key_scopus, sep = ""))
curlGetHeaders(url)
# Header lines [15:17] show "X-RateLimit-Limit:", "X-RateLimit-Remaining:", and "X-RateLimit-Reset:" (the reset time is given in Unix epoch time: seconds elapsed since 1970-01-01 00:00:00 UTC, excluding leap seconds)
# Springer request limits
# 300 calls/min and 5000 calls/day
# not found in the response header; Springer emailed to say the above rates were being exceeded
#key_spring <- scythe::scythe_get_key("springer")
#url_spring <- paste0("http://api.springernature.com/meta/v2/json?q=doi:10.1007/BF00627098&api_key=", key_spring)
#curlGetHeaders(url_spring)
```
Run each library search in parallel in a separate background job to keep the console available for other work. By default `job::job()` imports the entire global environment into the background job; the sketch below shows one way to limit that.
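If copying the full global environment is undesirable, the `job` package documents an `import` argument that restricts what is carried into the job. A hedged illustration (`"import example"` is just a placeholder title):
```{r job_import_sketch, eval=FALSE}
#| code-fold: true
# Hedged illustration: import only the named objects into the background job
# rather than the whole global environment (see ?job::job).
job::job({
  citation <- scythe::citation_search(adc_all_dois[1], "plos")
}, import = c(adc_all_dois, key), title = "import example")
```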
::: callout-note
`scythe::scythe_set_key()` is a wrapper for the `keyring` package. An interactive password prompt is required to access the API keys stored in the keyring. This *does not work* within a background job environment; your keyring needs to be temporarily unlocked with `keyring::keyring_unlock("scythe", "your password")`. Replace `password` in the next code chunk with your actual keyring password.
:::
::: callout-warning
Be careful not to save, commit, or push your personal keyring password.
:::
```{r citation_searches_background_jobs}
#| code-fold: true
# Run each source/library search in a separate background job. A for loop with tryCatch() returns partial results if an API query fails, which is better than losing all progress to a single error in a single vectorized call.
key <- "password"
# Set up empty results data.frames
citations_scopus <- data.frame()
citations_springer <- data.frame()
citations_plos <- data.frame()
citations_xdd <- data.frame()
######### Scopus
job::job({
for (i in seq_along(adc_all_dois)) {
# access API keys within background job environment
keyring::keyring_unlock("scythe", key)
# suppress errors and continue loop iteration
result <- tryCatch(citation <- scythe::citation_search(adc_all_dois[i], "scopus"),
error = function(err) {
data.frame("article_id" = NA,
"article_title" = NA,
"dataset_id" = adc_all_dois[i],
"source" = paste0("scopus ", as.character(err)))
}
)
citations_scopus <- rbind(citations_scopus, result)
write.csv(citations_scopus, path_scopus, row.names = F)
}
}, title = paste0("scopus citation search ", Sys.time()))
######### PLOS
job::job({
for (i in seq_along(adc_all_dois)) {
# access API keys within background job environment
keyring::keyring_unlock("scythe", key)
# suppress errors and continue loop iteration
result <- tryCatch(citation <- scythe::citation_search(adc_all_dois[i], "plos"),
error = function(err) {
data.frame("article_id" = NA,
"article_title" = NA,
"dataset_id" = adc_all_dois[i],
"source" = paste0("plos", as.character(err)))
}
)
citations_plos <- rbind(citations_plos, result)
write.csv(citations_plos, path_plos, row.names = F)
}
}, title = paste0("plos citation search ", Sys.time()))
########## XDD
job::job({
for (i in seq_along(adc_all_dois)) {
# access API keys within background job environment
keyring::keyring_unlock("scythe", key)
# suppress errors and continue loop iteration
result <- tryCatch(citation <- scythe::citation_search(adc_all_dois[i], "xdd"),
error = function(err) {
data.frame("article_id" = NA,
"article_title" = NA,
"dataset_id" = adc_all_dois[i],
"source" = paste0("xdd", as.character(err)))
}
)
citations_xdd <- rbind(citations_xdd, result)
write.csv(citations_xdd, path_xdd, row.names = F)
}
}, title = paste0("xdd citation search ", Sys.time()))
########## Springer
# divide ADC corpus into chunks less than Springer's 5,000/day request limit
springer_limit <- 4995
num <- seq_along(adc_all_dois)
chunk_list <- split(adc_all_dois, ceiling(num/springer_limit))
job::job({
for(chunk in seq_along(chunk_list)){
# pause api query for > 24hrs between chunk runs
if(chunk != 1){Sys.sleep(87000)}
for (i in seq_along(chunk_list[[chunk]])){
# access API keys within background job environment
keyring::keyring_unlock("scythe", key)
# suppress errors and continue loop iteration
result <- tryCatch(citation <- scythe::citation_search(chunk_list[[chunk]][i], "springer"),
error = function(err) {
data.frame("article_id" = NA,
"article_title" = NA,
"dataset_id" = chunk_list[[chunk]][i],
"source" = paste0("springer ", as.character(err)))
}
)
citations_springer <- rbind(citations_springer, result)
#write.csv(citations_springer, path_springer, row.names = F)
}
}
}, title = paste0("springer citation search ", Sys.time())
)
```
Springer's API query limits affected how we ran our search. We broke the list of ADC DOIs into chunks of \< 5,000 DOIs and ran each chunk through the API, waiting 24 hrs between the last query of one chunk and the start of the next. We could have changed the base `scythe` function `citation_search_springer()` to throttle itself to accommodate both request limits (sketched below), but that would substantially slow the function and make smaller DOI queries cumbersome.
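For reference, a hedged sketch of the per-request throttling approach we decided against; `throttled_springer_search()` is a hypothetical wrapper, not a `scythe` function.
```{r springer_throttle_sketch, eval=FALSE}
#| code-fold: true
# Hedged sketch: hypothetical wrapper that keeps one loop under Springer's
# ~300 calls/min limit by sleeping between requests.
throttled_springer_search <- function(doi) {
  Sys.sleep(0.21)  # ~285 calls/min, just under the per-minute limit
  scythe::citation_search(doi, "springer")
}
```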
```{r springer_search_throttled}
#| code-fold: true
######### Springer
# divide ADC corpus into chunks less than Springer's 5,000/day request limit
springer_limit <- 4995
length(adc_all_dois) / springer_limit
chunk_1 <- adc_all_dois[1:springer_limit]
chunk_2 <- adc_all_dois[(springer_limit+1):(springer_limit*2)]
chunk_3 <- adc_all_dois[((springer_limit*2)+1):length(adc_all_dois)]
# change "chunk_x" object to search next chunk of DOIs. Must wait 24 hrs from last request.
doi_chunk = chunk_3
job::job({
for (i in seq_along(doi_chunk)){
# access API keys within background job environment
keyring::keyring_unlock("scythe", key)
# suppress errors and continue loop iteration
result <- tryCatch(citation <- scythe::citation_search(doi_chunk[i], "springer"),
error = function(err) {
data.frame("article_id" = NA,
"article_title" = NA,
"dataset_id" = doi_chunk[i],
"source" = paste0("springer ", as.character(err)))
}
)
citations_springer <- rbind(citations_springer, result)
write.csv(citations_springer, path_springer, row.names = F)
}
}, title = paste0("springer citation search", Sys.time())
)
```
### Dealing with errors
The `tryCatch()` calls in the search `for` loops above record errors produced by any API request or `scythe` function. The corresponding DOIs are extracted and rerun through `scythe` a second time. When running the errored DOIs back through Scopus we discovered two bugs in the `scythe` code. The first bug was fixed [here](https://github.com/DataONEorg/scythe/commit/59bb1944bd755c2e3cd6258f02025ad1d0515723). The second was a query return that did not have a DOI (conference proceedings).
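The helpers used below (`mk_result_list`, `did_this_error`, `query_errors`) live in `R/functions.R`. As a rough guide to the logic, a hedged sketch of the error-detection step (`did_this_error_sketch()` is illustrative, not the real helper):
```{r error_helper_sketch, eval=FALSE}
#| code-fold: true
# Hedged sketch of the error-detection logic: the tryCatch() handlers above
# record failures as rows with NA article_id and the error text pasted into
# the `source` column, so errored DOIs can be filtered back out.
did_this_error_sketch <- function(results) {
  dplyr::filter(results, is.na(article_id) & grepl("Error", source))
}
```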
```{r pull_errors_rerun}
#| code-fold: true
# Extract DOIs that error during scythe queries
# read in raw results .csv into a list of dataframes
results_list <- lapply(source_list, FUN = mk_result_list)
# assign source names to list elements
names(results_list) <- source_list
# pull dataframe rows that had API request errors
error_list <- lapply(results_list, FUN = did_this_error)
# run error DOIs back through scythe
error_query_results <- sapply(error_list, FUN = query_errors, source_list)
# write error re-run results to .csv
map2(error_query_results, source_list, write_error_results)
```
::: callout-note
Running a second round of API queries using error DOIs is semi-automated above. Future script users will likely need to adjust the above code chunk to combine 1st and 2nd run results for analysis.
:::
```{r run_failed_query_dois_again}
#| code-fold: true
# This code was used during the '2022-07-08' scythe run.
## Scopus
citations_error_scopus <- data.frame()
job::job({
for (i in seq_along(doi_error_scopus)) {
# access API keys within background job environment
keyring::keyring_unlock("scythe", key)
# suppress errors and continue loop iteration
result <- tryCatch(citation <- scythe::citation_search(doi_error_scopus[i], "scopus"),
error = function(err) {
data.frame("article_id" = NA,
"article_title" = NA,
"dataset_id" = doi_error_scopus[i],
"source" = paste0("scopus ", as.character(err)))
}
)
citations_error_scopus <- rbind(citations_error_scopus, result)
}
}, title = paste0("scopus error citation search ", Sys.time()))
# save search results from errored DOI
write.csv(citations_error_scopus,
file.path(output_directory, paste0("scythe_", date, "_scopus_error.csv")),
row.names = F)
# 2022-07-14 scopus errors were incorporated into cits_scopus at some point. Not reflected in this code script.
######### PLOS
citations_error_plos <- data.frame()
job::job({
for (i in seq_along(doi_error_plos)) {
# access API keys within background job environment
keyring::keyring_unlock("scythe", key)
# suppress errors and continue loop iteration
result <- tryCatch(citation <- scythe::citation_search(doi_error_plos[i], "plos"),
error = function(err) {
data.frame("article_id" = NA,
"article_title" = NA,
"dataset_id" = doi_error_plos[i],
"source" = paste0("plos", as.character(err)))
}
)
citations_error_plos <- rbind(citations_error_plos, result)
}
}, title = paste0("plos error citation search ", Sys.time()))
# empty dataframe return means no citations found and no HTTP errors
```
## Analysis / Results
### Does addition of xDD improve quality & scope of ADC dataset citations?
*Does increasing the number of sources we are searching result in more complete coverage/quality?*
```{r read_saved_scythe_results, eval=T, message=FALSE, warning=FALSE}
#| code-fold: true
# read in saved scythe results for all sources `cits_source` objects created
# reduces dependency on global environment objects - can pick up analysis here instead of rerunning scythe. Add error re-run results if detected.
for(i in source_list){
path <- eval(parse(text = paste0("path_", i)))
if(file.exists(path)){
assign(paste0("cits_",i),
if(file.exists(paste0(path_error, i, "_err_res.csv"))){
rbind(read_csv(file.path(path)),
read_csv(file.path(paste0(path_error, i, "_err_res.csv"))))
} else(read_csv(file.path(path)))
)
} else{print(paste0(i, " saved scythe results do not exist in the output directory"))
}
}
# read in saved combined results if already exist, create and save if not
if(file.exists(path_all)) {
scythe_cit <- read_csv(path_all)
} else{
scythe_cit <- rbind(cits_scopus,
cits_springer,
cits_plos,
cits_xdd) %>%
filter(!is.na(article_id)) # remove NA/error observations
#grepl(dataset_id, pattern = "^10.18739.*")) # remove datasets not housed on the ADC
write_csv(scythe_cit, path_all)
}
```
```{r raw_scythe_results, eval=T}
#| code-fold: true
#| label: tbl-raw
#| tbl-cap: "Raw Results from Scythe Search of ADC DOIs"
# create mini dataframe to populate total citations in summary table
scythe_total <- tibble("source" = "Total",
"num_cit" = length(scythe_cit$dataset_id),
"num_datasets" = length(unique(scythe_cit$dataset_id)))
# summary table + cheater total row
scythe_sum <- scythe_cit %>%
group_by(source) %>%
summarise("num_cit" = length(source),
"num_datasets" = length(unique(dataset_id))) %>%
rbind(scythe_total)
scythe_sum$source <- c("PLOS", "Scopus", "Springer", "xDD", "Total")
knitr::kable(scythe_sum,
col.names = c("Source", "Number of Citations", "Number of Datasets"))
```
#### Do citation sources overlap in coverage?
We evaluated the redundancy in dataset citations found among sources by matching citations between source search results. **A citation is defined by the unique combination of `article_id` and `dataset_id`**. Percent overlap is the number of a source's citations that are also found in a second source, divided by the total number of citations found within the first source; a hedged sketch of the calculation follows.
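As a rough guide to that definition, a hedged sketch of the overlap calculation (the version used below is `calc_prct_overlap()` from `R/functions.R`; `prct_overlap_sketch()` here is illustrative):
```{r overlap_calc_sketch, eval=FALSE}
#| code-fold: true
# Hedged sketch: the share of source A's citations (unique article_id x
# dataset_id pairs) that also appear in source B.
prct_overlap_sketch <- function(source_a, source_b) {
  a <- unique(paste(source_a$article_id, source_a$dataset_id))
  b <- unique(paste(source_b$article_id, source_b$dataset_id))
  length(intersect(a, b)) / length(a)
}
```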
```{r source_overlap_figure, eval=TRUE}
#| code-fold: true
#| fig-cap: "Citation Source Overlap: Number of citations found in multiple sources and number of citations found uniquely in only one source."
#| label: fig-source-overlap
# summarize the sources that each citation is found in for table
overlap <- scythe_cit %>%
group_by(dataset_id, article_id) %>%
summarize(source_combination = paste(source, collapse = "&")) %>%
group_by(source_combination) %>%
summarize(n = n())
# Create euler diagram of overlap
# Colorblind-friendly color palette
#show_col(viridis(30, option = "C"))
# viridis color palette
#overlap_color <- c("#AB2494FF", "#DE6065FF", "#FCA338FF", "#F0F921FF")
overlap_color <- c("#19B36A", "#B5E1E7", "#1B897E", "#7AFDB1")
ovrlp_vec <- setNames(overlap$n, as.character(overlap$source_combination))
fit <- euler(ovrlp_vec)
euler_fig <- plot(fit,
quantities = TRUE,
fills = list(fill = overlap_color),
labels = c("PLOS", "Scopus", "Springer", "xDD"))
euler_fig
ggsave(euler_fig,
filename = file.path(output_directory, paste0("scythe_", date, "overlap_fig.png")),
dpi = 600,
scale = 1,
units = "in",
width = 6)
#
```
```{r prct_overlap_table, eval=TRUE, echo=FALSE}
#| label: tbl-overlap
#| tbl-cap: "Percent overlap between Scythe sources"
# build dataframe with overlap calcs
scythe_overlap_sum <- scythe_cit %>%
mutate("citation_df" = ifelse(
source == "plos",
"cits_plos",
ifelse(
source == "scopus",
"cits_scopus",
ifelse(source == "springer", "cits_springer", "cits_xdd")
)
)) %>%
group_by(source, citation_df) %>%
summarise(
"total_citations" = n(),
"prct_in_plos" = calc_prct_overlap(eval(parse(text = citation_df)),
cits_plos),
"prct_in_scopus" = calc_prct_overlap(eval(parse(text = citation_df)),
cits_scopus),
"prct_in_springer" = calc_prct_overlap(eval(parse(text = citation_df)),
cits_springer),
"prct_in_xdd" = calc_prct_overlap(eval(parse(text = citation_df)),
cits_xdd)
) %>%
select(1, 3:7)
# read in saved overlap file, or write one
if(file.exists(path_overlap)){
overlap_table <- read_csv(file.path(path_overlap))
} else{
write_csv(scythe_overlap_sum, file.path(path_overlap))
overlap_table <- read_csv(file.path(path_overlap))
}
# Overlap table
overlap_table$source <- c("PLOS", "Scopus", "Springer", "xDD")
overlap_table[,3:6] <- sapply(overlap_table[,3:6], function(x) x*100)
knitr::kable(overlap_table,
col.names = c("Source", "Total Citations", "% in PLOS", "% in Scopus", "% in Springer", "% in xDD"),
digits = 1)
```
Scopus found `r overlap[[which(overlap$source_combination == "scopus"), "n"]]` unique citations not found in any other digital library; Springer found `r overlap[[which(overlap$source_combination == "springer"), "n"]]`, PLOS `r overlap[[which(overlap$source_combination == "plos"), "n"]]`, and xDD `r overlap[[which(overlap$source_combination == "xdd"), "n"]]` unique citations, respectively. The total number of unique citations returned by `scythe` is `r sum(overlap$n)`.
### Do Dataset Citations Differ Among Research Disciplines?
The Arctic Data Center uses a semantic ontology to classify the academic disciplines of datasets. Datasets can be labeled with up to 5 disciplines, which enables them to be more easily found with search terms. The ADC's ontology can be found at <https://bioportal.bioontology.org/ontologies/ADCAD/?p=classes&conceptid=root>.
::: callout-note
DOIs (Digital Object Identifiers) are one system of persistent identifiers (PIDs). Different PIDs may be used in different academic disciplines; for example, genetics/bioinformatics studies often use accession numbers from the GenBank repository to uniquely label DNA and protein sequences. This analysis is limited to citations that specifically use DOIs; citations using dataset titles or other PIDs are not included.
:::
```{r discipline_analysis, eval=TRUE, warning = FALSE, message=FALSE}
#| code-fold: true
#| label: tbl-disc
#| tbl-cap: "Number of citations found by discipline"
# Meld manual dataset discipline categorization with Solr categories
disc_adc <- read_csv(file.path(output_directory,
paste0("solr_adc_", date, "_disc.csv"))) %>%
select(1,3:7) # remove data uploaded column
# read in manual dataset discipline classifications. Remove extra columns
disc_manual <- read_csv(file.path(data_dir, "adc-discipline-2022-10-27.csv"))[1:6]
colnames(disc_manual)[1] <- "dataset_id"
# merge discipline classifications, prioritizing my own manual classifications
# for the sake of this analysis.
# (dplyr::coalesce() across the paired columns would be cleaner; this works for now)
disc_all <- full_join(disc_manual, disc_adc) %>%
mutate(disc_cat_1 = ifelse(is.na(disc_cat_1), category_1, disc_cat_1),
disc_cat_2 = ifelse(is.na(disc_cat_2), category_2, disc_cat_2),
disc_cat_3 = ifelse(is.na(disc_cat_3), category_3, disc_cat_3),
disc_cat_4 = ifelse(is.na(disc_cat_4), category_4, disc_cat_4),
disc_cat_5 = ifelse(is.na(disc_cat_5), category_5, disc_cat_5)
) %>%
select(-c(category_1, category_2, category_3, category_4, category_5)) %>%
mutate(dataset_id = sub(pattern = "^doi:{1}", dataset_id, replacement = "" ))
# assign dataset classifications to found scythe citations
scythe_cit_disc <- left_join(scythe_cit, disc_all) %>%
distinct(article_id, dataset_id, .keep_all = TRUE)
# transform 5 discipline classification columns into single column - multiple rows per dataset
scythe_cit_disc_l <- scythe_cit_disc %>%
pivot_longer(cols = 5:9, names_to = NULL) %>%
na.omit()
# summarize disc classifications - sum number of citations per category
cit_disc <- scythe_cit_disc_l %>%
group_by(value) %>%
summarise(n_cit = n_distinct(article_id, dataset_id)) # count unique article-dataset pairs
knitr::kable(cit_disc,
col.names = c("Dataset Discipline", "Number of Citations")) #%>%
#kableExtra::scroll_box(width = "500px", height = "200px")
```
```{r sunburst_figure, eval = TRUE}
#| code-fold: true
#| fig-cap: "scythe dataset citations grouped by academic discipline."
#| label: fig-disc-sunburst
# academic discipline ontology fit to ADC datasets and scythe citations
source(file.path("./R/ontology_hierarchy.R"))
source(file.path("./R/sunburst_discipline.R"))
# all levels - hydrology broken into individual leaves
sun_all <- sunburstR::sunburst(sun_levels_all,
legend=FALSE,
#percent=TRUE,
count=TRUE,
color = adc_color)
sun_all
```
Of the `r nrow(scythe_cit_disc)` unique dataset citations found by `scythe`, datasets classified as `r cit_disc[[which(cit_disc$n_cit == max(cit_disc$n_cit)),"value"]]` and `r cit_disc[[which(cit_disc$n_cit == max(cit_disc$n_cit[cit_disc$n_cit != max(cit_disc$n_cit)])),"value"]]` constituted the vast majority of citations, with `r max(cit_disc$n_cit)` and `r max(cit_disc$n_cit[cit_disc$n_cit != max(cit_disc$n_cit)])` citations respectively (@tbl-disc and @fig-disc-sunburst).
### Citations over time
```{r citations_over_time, eval=T}
#| code-fold: true
#| fig-cap: "Number of dataset citations per dataset as related to number of days publically available on the ADC to date of analysis (2022-11-03)"
#| label: fig-cit_over_time
# analysis date as date object
date_date <- as.Date(date)
date_uploaded <- read_csv(file.path(output_directory,
paste0("solr_adc_", date, "_disc.csv"))) %>%
select(1:2) %>%
mutate(dateUploaded = as.Date(dateUploaded)) %>%
mutate(age = date_date - dateUploaded) %>%
mutate(dataset_id = sub(pattern = "^doi:{1}", dataset_id, replacement = "" ))
scythe_cit_date <- scythe_cit %>%
left_join(date_uploaded) %>%
na.omit(dateUploaded) # remove citations without date uploaded info
scythe_cit_date_sum <- scythe_cit_date %>%
group_by(dataset_id, age) %>%
summarize("num_cit" = length(article_id))
date_graph <- scythe_cit_date_sum %>%
ggplot(aes(x = age, y = num_cit)) +
geom_point() +
theme_classic() +
labs(x = "# days dataset has been available to analysis date",
y = "number of citations found by scythe")
date_graph
```
There appears to be an event at \~700 days (almost two years before the analysis date), possibly State of the Arctic report metadata records showing up. Overall there is no obvious relationship with time, and most datasets have a low number of citations. 243 datasets had citations found by `scythe` (888 citations total) and also had `dateUploaded` data from Solr.
### How many citations found by `scythe` are already known to the DataONE Metrics Service?
`do_cit_src_07` came from Rushiraj in July. It records how each citation entered the DataONE Metrics Service: `Crossref`, `Metrics Service Ingest` (previous `scythe` runs), and ORCID. I cross-referenced the `scythe` citation results with both DataONE metrics citation lists and looked at the distribution of citation sources.
*Is this going to be a part of AGU? Interesting to others?*
```{r new_scythe_citations, eval = T}
#| code-fold: true
#| fig-cap: "Dataset Citation Reporting Sources From the DataOne Arctic Data Center Metrics Service"
#| label: fig-dataone-metric-report
do_cit_src_07 <- readr::read_csv(file.path(data_dir, "dataone_cits_report_2022_07_25.csv"))
# source_id = 'Unique identifier to the source dataset / document / article that cited the target dataset '
# target_id = 'Unique identifier to the target DATAONE dataset. This is the dataset that was cited.'
# clean up dataone citation reporter csv. Remove extra ' from character strings
do_cit_src_07 <- as.data.frame(lapply(do_cit_src_07, gsub, pattern = "'", replacement = "", fixed = TRUE))
# rename dataone metrics citations columns to match scythe results
# replace unique Orcid # with "ORCiD"
do_cit_src_07 %<>%
rename("article_id" = source_id,
"dataset_id" = target_id) %>%
mutate(reporter = sub("^http.*","ORCiD", do_cit_src_07$reporter))
do_cit_source_sum <- do_cit_src_07 %>%
group_by(reporter) %>%
summarise(num_cit = n())
do_cit_source_fig <- do_cit_source_sum %>%
ggplot(aes(reporter, num_cit)) +
geom_col() +
coord_flip() +
theme_minimal() +
theme(panel.grid.major.y = element_blank(),
axis.text.x=element_blank()) +
scale_y_continuous(limits = c(NA, 1300)) +
geom_text(aes(label = num_cit), hjust = -0.5) +
labs(x = "",
y = "Number of Citations",
caption = "Total citations count July 2022: 2035")
do_cit_source_fig
```
```{r scythe-already-in-dataone, eval=F}
#| code-fold: true
#| label: tbl-scythe-do-overlap
#| tbl-cap: "Citations found by scythe that were previously recorded in DataOne Metrics Service"
unique_citations <- scythe_cit %>%
distinct(article_id, dataset_id)
scythe_cit_new <- anti_join(unique_citations, do_cit_src_07, by = c("article_id", "dataset_id")) %>%
na.omit()
# have 642 new scythe citations not found in dataone metrics
# Citations in dataone metrics that also show up in latest scythe search `unique_citations`
# These are the citations that overlap between the DataONE metrics and scythe
scythe_in_dataone <- semi_join(do_cit_src_07, unique_citations, by = c("article_id", "dataset_id"))
scythe_in_do_sum <- scythe_in_dataone %>%
group_by(reporter) %>%
summarise(num_cit = n())
knitr::kable(scythe_in_do_sum,
col.names = c("Source", "Number of Citations"))
```
`scythe` found 1060 unique new citations not in the DataONE Metrics Service as of November 2022. The code still needs to be functionalized to compare citations between two run dates; a hedged sketch follows.
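A hedged sketch of what that functionalized comparison could look like, keyed on the `article_id` x `dataset_id` pair (`new_citations_since()` is a hypothetical helper):
```{r run_comparison_sketch, eval=FALSE}
#| code-fold: true
# Hedged sketch: citations present in a newer scythe run but absent from an
# older run (or from the DataONE metrics list).
new_citations_since <- function(run_new, run_old) {
  dplyr::anti_join(dplyr::distinct(run_new, article_id, dataset_id),
                   run_old, by = c("article_id", "dataset_id"))
}
```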
*This could be a figure - proportion columns*
*Query CrossRef to see if scythe results were reported*
## Possible Next Steps
- Dataset citations are rare and the number of datasets per classification varies widely; sampling biases need to be controlled for <https://zenodo.org/record/4730857#.YoaQ2WDMKrM>
- The total number of citations is extremely useful. Ground-truth analysis: for a small number of datasets, manually search the literature for citations.
- Do usage metrics (downloads and views) correlate well with citation metrics?
- Network analysis