---
library_name: keras-hub
pipeline_tag: text-generation
---

Hey, I am CosmoGemma 👋 I can answer cosmology questions from astro-ph.CO research articles.

This is Gemma_2b_en fine-tuned on 3.5k QA pairs generated from Cosmology and Nongalactic Astrophysics (arXiv astro-ph.CO) articles
from 2018-2022, and tested on 1k QA pairs generated from 2023 articles, where it scores over 75% accuracy.
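The scoring rule behind the 75% figure is not documented here. As an illustration only, a minimal sketch of one plausible way to score generated answers against reference answers (token-level F1 with a hypothetical correctness threshold, not necessarily the metric actually used):

```
import re

def token_f1(prediction, reference):
    """Token-level F1 between a generated answer and a reference answer."""
    pred = re.findall(r"\w+", prediction.lower())
    ref = re.findall(r"\w+", reference.lower())
    if not pred or not ref:
        return 0.0
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def accuracy(pairs, threshold=0.5):
    """Fraction of (prediction, reference) pairs above a hypothetical F1 threshold."""
    return sum(token_f1(p, r) >= threshold for p, r in pairs) / len(pairs)
```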



Example to run CosmoGemma locally:

Requirements:
```
keras==3.6.0
keras_nlp==0.15.1
python==3.10
```
If they are not available, install them with:
```
pip install -q -U keras-nlp
pip install -q -U "keras>=3"
```
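To confirm the installed versions match the pins above, a quick check:

```
import sys
import keras
import keras_nlp

print("python:", sys.version.split()[0])    # expect 3.10.x
print("keras:", keras.__version__)          # expect 3.6.0
print("keras_nlp:", keras_nlp.__version__)  # expect 0.15.1
```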

Script:

```
import os

os.environ["KERAS_BACKEND"] = "jax"  # Or "torch" or "tensorflow".
# Avoid memory fragmentation on the JAX backend.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "1.00"

import keras
import keras_nlp

# Load the fine-tuned model from the Hugging Face Hub.
gemma_lm = keras_nlp.models.CausalLM.from_preset("hf://sultan-hassan/CosmoGemma_2b_en")

# Instruction template matching the fine-tuning format.
template = "Instruction:\n{instruction}\n\nResponse:\n{response}"

question = "write your question here"

prompt = template.format(instruction=question, response="")
out = gemma_lm.generate(prompt, max_length=1024)

# The answer is everything after the "Response:\n" marker.
ind = out.index("Response:\n") + len("Response:\n")
print("Question:", question)
print("Answer:", out[ind:])
```
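For repeated queries, the formatting and parsing steps above can be wrapped in a small helper (a convenience sketch reusing `gemma_lm` and `template` from the script; `ask` is a hypothetical name):

```
def ask(question, max_length=1024):
    """Format a question with the instruction template and return the answer."""
    prompt = template.format(instruction=question, response="")
    out = gemma_lm.generate(prompt, max_length=max_length)
    # Everything after the "Response:\n" marker is the generated answer.
    return out.split("Response:\n", 1)[-1].strip()

print(ask("What is the role of dark energy in cosmic expansion?"))
```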


Training dataset


The dataset was generated with the llama3.1:8b-instruct-fp16 model, which produced QA pairs from abstracts of Cosmology and Nongalactic Astrophysics (arXiv astro-ph.CO) articles
from 2018-2022.
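The generation pipeline itself is not included in this card. As a rough illustration, assuming the model was served locally through Ollama (the model tag follows Ollama's naming) and queried over its REST API, producing a QA pair from one abstract might look like this (prompt wording, endpoint, and function name are assumptions):

```
import requests

PROMPT = (
    "Read the following astro-ph.CO abstract and write one question "
    "it answers, followed by a concise answer.\n\n"
    "Abstract: {abstract}\n\nQuestion:"
)

def generate_qa(abstract):
    """Ask a locally served Llama model to produce a QA pair from an abstract."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # default Ollama endpoint
        json={
            "model": "llama3.1:8b-instruct-fp16",
            "prompt": PROMPT.format(abstract=abstract),
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```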

Example QA pairs from the training dataset:

```
Question: What are some common methods for model selection in astrophysics?
Answer: The goodness of fit, the likelihood ratio test, Bayesian model selection using Bayes factors, and the classical as well as the Bayesian information theoretic approaches.

Question: What type of coupling in inflationary models can affect the prediction of inflationary parameters?
Answer: Non-minimal coupling to gravity.

Question: What type of distribution is used to model the probability of non-linear density field?
Answer: A superposition of a Gaussian and a lognormal distribution.

Question: Can the shape of central cluster galaxies be used as a predictor of weak-lensing mass bias in individual clusters?
Answer: Yes, we find that on average, the lensing masses of clusters with the roundest / most elliptical 25% of BCGs are biased ~20% high / low compared to the average.

Question: What could be the cause of remaining excess power in a signal after foreground mitigation?
Answer: Residual foreground emission from sources or diffuse emission far away from the phase centre, polarization leakage, chromatic calibration errors, ionosphere, or low-level radio-frequency interference

Question: What is the precision of photometric redshift estimates for LRGs?
Answer: 0.02

Question: What is the form of the scaling relation used to calculate X-ray luminosity?
Answer: $L_{\rm{X}} \propto \text{A}_{\rm{X}}M_{\text{200c}}^{\text{B}_{\rm{X}}} E(z)^2 (1+z)^{\gamma_{\rm{X}}}$

```


This is a [`Gemma` model](https://keras.io/api/keras_nlp/models/gemma) uploaded with the KerasNLP library; it can be used with the JAX, TensorFlow, and PyTorch backends.
The model implements a `CausalLM` task.

Model config:
* **name:** gemma_backbone
* **trainable:** True
* **vocabulary_size:** 256000
* **num_layers:** 18
* **num_query_heads:** 8
* **num_key_value_heads:** 1
* **hidden_dim:** 2048
* **intermediate_dim:** 32768
* **head_dim:** 256
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0
* **query_head_dim_normalize:** True
* **use_post_ffw_norm:** False
* **use_post_attention_norm:** False
* **final_logit_soft_cap:** None
* **attention_logit_soft_cap:** None
* **sliding_window_size:** 4096
* **use_sliding_window_attention:** False
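
These values can also be read back from the loaded model, continuing from the script above (a minimal sketch; key names follow the KerasNLP backbone config):

```
# Inspect the configuration of the loaded Gemma backbone.
config = gemma_lm.backbone.get_config()
for key in ("num_layers", "num_query_heads", "hidden_dim", "head_dim"):
    print(key, "=", config[key])
```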
